Paper Title

Algorithmic Decision Making with Conditional Fairness

Authors

Xu, Renzhe, Cui, Peng, Kuang, Kun, Li, Bo, Zhou, Linjun, Shen, Zheyan, Cui, Wei

Abstract

Nowadays, fairness issues have raised great concerns in decision-making systems. Various fairness notions have been proposed to measure the degree to which an algorithm is unfair. In practice, there frequently exists a certain set of variables, which we term fair variables, that are pre-decision covariates such as users' choices. The effects of fair variables are irrelevant in assessing the fairness of a decision support algorithm. We thus define conditional fairness as a more sound fairness metric by conditioning on the fair variables. Given different prior knowledge of fair variables, we demonstrate that traditional fairness notions, such as demographic parity and equalized odds, are special cases of our conditional fairness notion. Moreover, we propose a Derivable Conditional Fairness Regularizer (DCFR), which can be integrated into any decision-making model, to track the trade-off between precision and fairness of algorithmic decision making. Specifically, an adversarial representation-based conditional independence loss is proposed in our DCFR to measure the degree of unfairness. With extensive experiments on three real-world datasets, we demonstrate the advantages of our conditional fairness notion and DCFR.
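To illustrate the idea of conditioning on fair variables, the following is a minimal sketch (not the paper's DCFR adversarial loss) of a conditional demographic parity gap: within each stratum of a discrete fair variable, it compares positive-prediction rates across sensitive groups and reports the worst-case gap. The function name `conditional_dp_gap` and the assumptions of binary predictions, a discrete sensitive attribute, and a discrete fair variable are illustrative choices, not from the paper; when the fair variable is constant, the metric reduces to the ordinary demographic parity gap, matching the abstract's claim that demographic parity is a special case.

```python
# Illustrative sketch: conditional demographic parity gap.
# Assumes binary predictions (0/1), a discrete sensitive attribute,
# and a discrete fair variable; names here are hypothetical.
from collections import defaultdict


def conditional_dp_gap(preds, sens, fair):
    """Worst-case, over strata f of the fair variable, of the spread in
    P(pred = 1 | sens = a, F = f) across sensitive groups a."""
    # Bucket predictions by (fair-variable stratum, sensitive group).
    strata = defaultdict(lambda: defaultdict(list))
    for p, a, f in zip(preds, sens, fair):
        strata[f][a].append(p)

    gap = 0.0
    for by_group in strata.values():
        if len(by_group) < 2:
            continue  # stratum lacks multiple sensitive groups; skip
        rates = [sum(v) / len(v) for v in by_group.values()]
        gap = max(gap, max(rates) - min(rates))
    return gap
```

With a constant fair variable this is plain demographic parity, e.g. `conditional_dp_gap([1, 1, 0, 1], [0, 0, 1, 1], [0, 0, 0, 0])` measures the rate difference between the two sensitive groups over the whole sample; with an informative fair variable, per-stratum rate differences can vanish even when marginal rates differ.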
