Paper Title

Blackbox Post-Processing for Multiclass Fairness

Paper Authors

Preston Putzel and Scott Lee

Paper Abstract

Applying standard machine learning approaches for classification can produce unequal results across different demographic groups. When used in real-world settings, these inequities can have negative societal impacts. This has motivated the development of various approaches to fair classification with machine learning models in recent years. In this paper, we consider the problem of modifying the predictions of a blackbox machine learning classifier in order to achieve fairness in a multiclass setting. To accomplish this, we extend the 'post-processing' approach of Hardt et al. (2016), which focuses on fairness for binary classification, to the setting of fair multiclass classification. We explore when our approach produces both fair and accurate predictions through systematic synthetic experiments, and we also evaluate discrimination-fairness tradeoffs on several publicly available real-world application datasets. We find that, overall, our approach produces minor drops in accuracy and enforces fairness when the number of individuals in the dataset is high relative to the number of classes and protected groups.
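
The abstract does not give implementation details, but the general post-processing idea it describes can be sketched as a linear program over group-specific randomized label-remapping matrices. The sketch below is a simplified illustration, not the authors' exact method: it enforces a demographic-parity-style constraint (the paper builds on equalized-odds-style criteria from Hardt et al. 2016, which additionally require held-out true labels), and the function name fair_postprocess, its parameters, and the eps slack are hypothetical. It assumes numpy and scipy are available.

```python
# Illustrative sketch only: fairness post-processing as a linear program.
# Variables T[g][j, k] = P(output class k | blackbox predicted j, group g).
# Objective: keep as many blackbox predictions as possible.
# Constraint: per-class output rates match across groups within slack eps
# (a demographic-parity-style criterion, not the paper's exact one).
import numpy as np
from scipy.optimize import linprog

def fair_postprocess(y_pred, groups, n_classes, eps=0.01):
    G = int(groups.max()) + 1          # groups assumed coded 0..G-1
    K = n_classes
    idx = lambda g, j, k: (g * K + j) * K + k

    # p[g, j]: fraction of group g that the blackbox assigned to class j
    p = np.zeros((G, K))
    for g in range(G):
        mask = groups == g
        p[g] = np.bincount(y_pred[mask], minlength=K) / mask.sum()

    # Objective: maximize probability of retaining the blackbox prediction
    # (a proxy for accuracy), i.e. minimize -sum_{g,j} p[g,j] * T[g][j,j].
    c = np.zeros(G * K * K)
    for g in range(G):
        for j in range(K):
            c[idx(g, j, j)] = -p[g, j]

    # Equality constraints: every row of every T[g] sums to 1.
    A_eq = np.zeros((G * K, G * K * K))
    for g in range(G):
        for j in range(K):
            A_eq[g * K + j, [idx(g, j, k) for k in range(K)]] = 1.0
    b_eq = np.ones(G * K)

    # Fairness constraints: for each class k and each group g > 0,
    # |sum_j p[g,j] T[g][j,k] - sum_j p[0,j] T[0][j,k]| <= eps.
    rows = []
    for g in range(1, G):
        for k in range(K):
            row = np.zeros(G * K * K)
            for j in range(K):
                row[idx(g, j, k)] = p[g, j]
                row[idx(0, j, k)] = -p[0, j]
            rows.append(row)    #  rate_g(k) - rate_0(k) <= eps
            rows.append(-row)   # -(rate_g(k) - rate_0(k)) <= eps
    A_ub = np.array(rows) if rows else None
    b_ub = np.full(len(rows), eps) if rows else None

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=(0.0, 1.0), method="highs")
    assert res.success, res.message
    return res.x.reshape(G, K, K)   # T[g][j, k]

# Example usage with synthetic blackbox predictions (2 groups, 3 classes):
rng = np.random.default_rng(0)
groups = rng.integers(0, 2, size=5000)
y_pred = rng.integers(0, 3, size=5000)
T = fair_postprocess(y_pred, groups, n_classes=3)
```

At prediction time, an individual in group g whose blackbox prediction is j would receive final label k with probability T[g][j, k]; randomizing the remapping in this way is what allows a fixed blackbox classifier to satisfy fairness constraints it otherwise violates, at the cost of the accuracy drop the abstract discusses.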
