Paper Title
A Fair Loss Function for Network Pruning
Paper Authors
Abstract
Model pruning can enable the deployment of neural networks in environments with resource constraints. While pruning may have only a small effect on the overall performance of a model, it can amplify biases already present in the model, such that subsets of samples see significantly degraded performance. In this paper, we introduce the performance weighted loss function, a simple modification of the cross-entropy loss function that can be used to limit the introduction of biases during pruning. Experiments using the CelebA, Fitzpatrick17k and CIFAR-10 datasets demonstrate that the proposed method is a simple and effective tool that can enable existing pruning methods to be used in fairness-sensitive contexts. The code used to produce all experiments in this paper can be found at https://github.com/robbiemeyer/pw_loss_pruning.
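As a rough illustration of the kind of loss the abstract describes, the following PyTorch sketch applies per-sample weights to a cross-entropy objective. The function names and the helper confidence_based_weights, which derives weights from the unpruned model's confidence in the true class, are hypothetical choices made for this example only; the paper's exact weight definition may differ.

    import torch
    import torch.nn.functional as F

    def performance_weighted_loss(logits, targets, sample_weights):
        # Per-sample cross-entropy followed by a weighted mean. The weights
        # are assumed to encode how strongly each sample should count in the
        # fine-tuning objective used during pruning.
        per_sample_ce = F.cross_entropy(logits, targets, reduction="none")
        return (sample_weights * per_sample_ce).mean()

    def confidence_based_weights(unpruned_model, inputs, targets):
        # Hypothetical weighting scheme for illustration: samples on which
        # the unpruned model assigns low probability to the true class get
        # larger weights, nudging the pruned model to preserve performance
        # on them rather than sacrificing them during pruning.
        with torch.no_grad():
            probs = F.softmax(unpruned_model(inputs), dim=1)
            true_class_prob = probs.gather(1, targets.unsqueeze(1)).squeeze(1)
        return 1.0 + (1.0 - true_class_prob)

In use, the weights would be computed once per batch from the unpruned reference model and passed to the loss when fine-tuning the pruned model, e.g. performance_weighted_loss(pruned_model(x_batch), y_batch, confidence_based_weights(unpruned_model, x_batch, y_batch)).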