Title
Responsibility: An Example-based Explainable AI approach via Training Process Inspection
Authors
Abstract
Explainable Artificial Intelligence (XAI) methods are intended to help human users better understand the decision making of an AI agent. However, many modern XAI approaches are unintuitive to end users, particularly those without prior AI or ML knowledge. In this paper, we present a novel XAI approach we call Responsibility that identifies the most responsible training example for a particular decision. This example can then be shown as an explanation: "this is what I (the AI) learned that led me to do that". We present experimental results across a number of domains, along with the results of an Amazon Mechanical Turk user study comparing Responsibility with existing XAI methods on an image classification task. Our results demonstrate that Responsibility can help improve accuracy for both human end users and secondary ML models.
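The abstract does not describe the Responsibility algorithm itself, so the following is not the paper's method. It is only a minimal sketch of the general idea of example-based explanation: surfacing one training example as the explanation for a decision. Here the "most relevant" example is picked by a simple cosine-similarity nearest-neighbor lookup in feature space, a common stand-in technique; the paper instead derives its choice by inspecting the training process. All names below (`most_relevant_example`, the toy feature vectors) are hypothetical.

```python
import numpy as np

def most_relevant_example(train_feats: np.ndarray, test_feat: np.ndarray) -> int:
    """Return the index of the training example most similar to the
    test input under cosine similarity in feature space.

    This is an illustrative proxy for example-based explanation,
    NOT the Responsibility method from the paper.
    """
    # Normalize rows so the dot product equals cosine similarity.
    train = train_feats / np.linalg.norm(train_feats, axis=1, keepdims=True)
    test = test_feat / np.linalg.norm(test_feat)
    sims = train @ test
    return int(np.argmax(sims))

# Toy data: three training feature vectors and one query vector.
train_feats = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]])
query = np.array([0.9, 0.1])
idx = most_relevant_example(train_feats, query)
# The returned index points at the training example one would show
# to the user as "this is what I learned that led me to do that".
```

The explanation shown to an end user would then be the raw training example (e.g., the training image) at index `idx`, which is what makes this family of explanations accessible to users without ML background.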