Paper Title
Fair Visual Recognition via Intervention with Proxy Features
Paper Authors
Paper Abstract
Deep learning models often learn to make predictions that rely on sensitive social attributes like gender and race, which poses significant fairness risks, especially in societal applications, e.g., hiring, banking, and criminal justice. Existing work tackles this issue by minimizing the information about social attributes in models for debiasing. However, the high correlation between the target task and social attributes makes bias mitigation incompatible with target task accuracy. Recalling that model bias arises because learning features related to bias attributes (i.e., bias features) helps target task optimization, we explore the following research question: \emph{Can we leverage proxy features to replace the role of bias features in target task optimization for debiasing?} To this end, we propose \emph{Proxy Debiasing}, which first transfers the target task's learning of bias information from bias features to artificial proxy features, and then employs causal intervention to eliminate the proxy features at inference. The key idea of \emph{Proxy Debiasing} is to design controllable proxy features that, on the one hand, replace bias features in contributing to the target task during the training stage, and, on the other hand, can easily be removed by intervention during the inference stage. This guarantees the elimination of bias features without affecting the target information, thus resolving the fairness-accuracy paradox of previous debiasing solutions. We apply \emph{Proxy Debiasing} to several benchmark datasets, and achieve significant improvements over state-of-the-art debiasing methods in both accuracy and fairness.
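The train-with-proxy / intervene-at-inference idea described in the abstract can be illustrated with a minimal numpy sketch. Everything below (the one-hot proxy embedding, the linear classifier, and averaging over attribute values as the intervention) is a hypothetical simplification for illustration, not the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def proxy_feature(bias_label, dim=4):
    """Hypothetical controllable proxy: a fixed embedding per bias-attribute value."""
    table = np.eye(dim)  # one-hot proxy embeddings (illustrative choice)
    return table[bias_label]

def forward(target_feat, proxy_feat, w_t, w_p):
    # The classifier sees target features plus the controllable proxy, so it
    # need not extract bias information from the input itself.
    return target_feat @ w_t + proxy_feat @ w_p

x = rng.normal(size=3)    # stand-in for bias-free target features
w_t = rng.normal(size=3)  # target-feature weights
w_p = rng.normal(size=4)  # proxy-feature weights

# Training stage: feed the true proxy, which stands in for the bias features.
logit_train = forward(x, proxy_feature(1), w_t, w_p)

# Inference stage: causal intervention -- replace the proxy with a
# bias-neutral value (here, the average embedding over all attribute
# values), so the prediction no longer depends on the sensitive attribute.
neutral = np.mean([proxy_feature(a) for a in range(4)], axis=0)
logit_infer = forward(x, neutral, w_t, w_p)
```

Because the proxy enters through a separate, controllable input rather than being entangled in the learned representation, removing it at inference leaves the target information (`x @ w_t`) untouched.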