Paper Title
Context matters for fairness -- a case study on the effect of spatial distribution shifts
Paper Authors
Paper Abstract
With the ever-growing involvement of data-driven, AI-based decision-making technologies in our daily social lives, the fairness of these systems is becoming a crucial concern. However, an important and often challenging aspect of utilizing such systems is establishing the range over which they remain valid, especially under distribution shifts, i.e., when a model is deployed on data whose distribution differs from that of the training set. In this paper, we present a case study on the newly released American Census datasets, a reconstruction of the popular Adult dataset, to illustrate the importance of context for fairness and to show how markedly spatial distribution shifts can affect both the predictive and fairness-related performance of a model. The problem persists for fairness-aware learning models, with the effects of context-specific fairness interventions differing across states and population groups. Our study suggests that robustness to distribution shifts is necessary before deploying a model in another context.
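As a rough illustration of the cross-state evaluation setup the abstract describes, the sketch below trains a model on one state's census data and measures how accuracy and a simple fairness metric change when the model is applied to another state. The abstract does not name the paper's tooling or protocol; this sketch assumes the folktables package (the ACS-based reconstruction of the Adult dataset), scikit-learn, the ACSIncome task, the states CA and TX, a logistic-regression model, and a demographic-parity gap, all of which are illustrative choices rather than the authors' exact method.

```python
# Minimal sketch: train on one state, evaluate predictive and fairness
# performance in-distribution (held-out CA) and under spatial shift (TX).
# Assumes the folktables and scikit-learn packages; states, task, model,
# and metric are illustrative assumptions, not the paper's protocol.
import numpy as np
from folktables import ACSDataSource, ACSIncome
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def demographic_parity_gap(y_pred, group):
    """Largest difference in positive-prediction rate across groups."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

source = ACSDataSource(survey_year="2018", horizon="1-Year", survey="person")

# Train on California, holding out a test split for in-distribution evaluation.
X, y, g = ACSIncome.df_to_numpy(source.get_data(states=["CA"], download=True))
X_tr, X_ca, y_tr, y_ca, g_tr, g_ca = train_test_split(
    X, y, g, test_size=0.2, random_state=0
)
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_tr, y_tr)

# Evaluate on held-out CA data and on a different state (TX).
X_tx, y_tx, g_tx = ACSIncome.df_to_numpy(source.get_data(states=["TX"], download=True))
for name, (X_te, y_te, g_te) in {
    "CA (in-distribution)": (X_ca, y_ca, g_ca),
    "TX (spatial shift)": (X_tx, y_tx, g_tx),
}.items():
    y_hat = model.predict(X_te)
    acc = (y_hat == y_te).mean()
    dp = demographic_parity_gap(y_hat, g_te)
    print(f"{name}: accuracy={acc:.3f}, demographic parity gap={dp:.3f}")
```

In this setup, a gap between the CA and TX rows, in accuracy, in the fairness metric, or in both, is the kind of context-dependence the paper's case study examines.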