Paper Title
A Human-Centric Perspective on Fairness and Transparency in Algorithmic Decision-Making
Paper Authors
Paper Abstract
Automated decision systems (ADS) are increasingly used for consequential decision-making. These systems often rely on sophisticated yet opaque machine learning models, which do not allow for an understanding of how a given decision was arrived at. This is not only problematic from a legal perspective; non-transparent systems are also prone to yielding unfair outcomes because their soundness is challenging to assess and calibrate in the first place, which is particularly worrisome for human decision-subjects. Based on this observation and building upon existing work, I aim to make the following three main contributions through my doctoral thesis: (a) understand how (potential) decision-subjects perceive algorithmic decisions (with varying degrees of transparency of the underlying ADS), as compared to similar decisions made by humans; (b) evaluate different tools for transparent decision-making with respect to their effectiveness in enabling people to appropriately assess the quality and fairness of ADS; and (c) develop human-understandable technical artifacts for fair automated decision-making. Over the course of the first half of my PhD program, I have already addressed substantial parts of (a) and (c), whereas (b) will be the main focus of the second half.