Paper title
Uncertainty-aware predictive modeling for fair data-driven decisions
Paper authors
Abstract
Both industry and academia have made considerable progress in developing trustworthy and responsible machine learning (ML) systems. While critical concepts like fairness and explainability are often addressed, the safety of systems is typically not sufficiently taken into account. By viewing data-driven decision systems as socio-technical systems, we draw on the uncertainty in ML literature to show how fairML systems can also be safeML systems. We posit that a fair model needs to be an uncertainty-aware model, e.g., by drawing on distributional regression. For fair decisions, we argue that a safe-fail option should be used for individuals with uncertain categorization. We introduce semi-structured deep distributional regression as a modeling framework which addresses multiple concerns brought against standard ML models, and show its use in a real-world example of algorithmic profiling of job seekers.
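To make the safe-fail idea concrete, the following is a minimal illustrative sketch (not the paper's implementation): an uncertainty-aware classifier defers the decision for individuals whose predicted categorization is uncertain, rather than forcing a possibly unfair label. The function name, the example probabilities, and the confidence threshold are all hypothetical.

```python
import numpy as np

def decide_with_safe_fail(probs, threshold=0.75):
    """Return a decision per individual, deferring ("defer") when the
    highest predicted class probability falls below `threshold`.

    `probs` is a sequence of per-class probability vectors, e.g. as
    produced by an uncertainty-aware (distributional) model.
    """
    decisions = []
    for p in np.asarray(probs, dtype=float):
        if p.max() >= threshold:
            decisions.append(int(p.argmax()))   # confident -> decide
        else:
            decisions.append("defer")           # uncertain -> safe fail
    return decisions

# Three hypothetical individuals: confident class 1, confident class 0,
# and one whose categorization is too uncertain to decide automatically.
probs = [[0.1, 0.9], [0.8, 0.2], [0.55, 0.45]]
print(decide_with_safe_fail(probs))  # [1, 0, 'defer']
```

Deferred cases would then be routed to, for instance, human review; the threshold trades off automation against the risk of acting on uncertain categorizations.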