Paper Title
Visual Auditor: Interactive Visualization for Detection and Summarization of Model Biases
Paper Authors
Paper Abstract
As machine learning (ML) systems become increasingly widespread, it is necessary to audit these systems for biases prior to their deployment. Recent research has developed algorithms for effectively identifying intersectional bias in the form of interpretable, underperforming subsets (or slices) of the data. However, these solutions and their insights are limited without a tool for visually understanding and interacting with the results of these algorithms. We propose Visual Auditor, an interactive visualization tool for auditing and summarizing model biases. Visual Auditor assists model validation by providing an interpretable overview of intersectional bias (bias that is present when examining populations defined by multiple features), details about relationships between problematic data slices, and a comparison between underperforming and overperforming data slices in a model. Our open-source tool runs directly in both computational notebooks and web browsers, making model auditing accessible and easily integrated into current ML development workflows. An observational user study in collaboration with domain experts at Fiddler AI highlights that our tool can help ML practitioners identify and understand model biases.
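To make the notion of underperforming data slices concrete, the sketch below illustrates the underlying idea only; it does not use Visual Auditor's actual API, and the dataset, feature names, noise rates, and thresholds are hypothetical. It trains a model on synthetic tabular data, then enumerates slices defined by one or two categorical features and reports slices whose accuracy falls notably below the overall test accuracy.

```python
# Minimal sketch of slice-based bias auditing (NOT Visual Auditor's actual API).
# All data, feature names, and thresholds are hypothetical illustrations.
from itertools import combinations

import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical tabular data; labels for one subgroup are made noisier so the
# model underperforms on slices containing that subgroup.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "sex":  rng.choice(["M", "F"], 2000),
    "race": rng.choice(["A", "B"], 2000),
    "age":  rng.integers(20, 65, 2000),
})
df["label"] = (df["age"] > 40).astype(int)
flip = rng.random(2000) < np.where(df["race"].eq("B"), 0.35, 0.05)
df["label"] = np.where(flip, 1 - df["label"], df["label"])

X = pd.get_dummies(df[["sex", "race", "age"]])
X_tr, X_te, y_tr, y_te = train_test_split(X, df["label"], test_size=0.5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

test = df.loc[X_te.index].assign(correct=model.predict(X_te) == y_te)
overall = test["correct"].mean()

# Enumerate slices defined by one or two categorical features and report those
# whose accuracy falls well below the overall test accuracy.
for k in (1, 2):
    for cols in combinations(["sex", "race"], k):
        stats = test.groupby(list(cols))["correct"].agg(["mean", "size"])
        for idx, row in stats.iterrows():
            key = dict(zip(cols, idx if isinstance(idx, tuple) else (idx,)))
            if row["size"] >= 30 and row["mean"] < overall - 0.05:
                print(f"{key}: acc={row['mean']:.2f} "
                      f"(overall {overall:.2f}, n={int(row['size'])})")
```

Visual Auditor itself builds on slice-finding algorithms and presents such slices interactively in notebooks and browsers; this snippet only conveys the basic pattern of comparing slice-level metrics against overall model performance.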