Paper Title
Alterfactual Explanations -- The Relevance of Irrelevance for Explaining AI Systems
Paper Authors
Paper Abstract
Explanation mechanisms from the field of Counterfactual Thinking are a widely used paradigm for Explainable Artificial Intelligence (XAI), as they follow a natural way of reasoning that humans are familiar with. However, all common approaches from this field are based on communicating information about features or characteristics that are especially important for an AI's decision. We argue that in order to fully understand a decision, not only is knowledge about relevant features needed, but awareness of irrelevant information also contributes substantially to a user's mental model of an AI system. Therefore, we introduce a new way of explaining AI systems. Our approach, which we call Alterfactual Explanations, is based on showing an alternative reality in which irrelevant features of an AI's input are altered. In doing so, the user directly sees which characteristics of the input data can change arbitrarily without influencing the AI's decision. We evaluate our approach in an extensive user study, revealing that it significantly contributes to participants' understanding of an AI. We show that alterfactual explanations convey an understanding of aspects of the AI's reasoning that established counterfactual explanation methods do not address.
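To make the mechanism concrete, the sketch below illustrates the core idea in Python: only features presumed irrelevant are perturbed, and a candidate input is kept only if the model's decision is unchanged, preferring candidates that move as far as possible from the original. The function name `alterfactual`, the random-search strategy, and the assumption that the irrelevant feature indices are known in advance are all illustrative choices for this sketch, not the generation procedure used in the paper.

```python
# Minimal sketch of the alterfactual idea: maximally alter presumed-irrelevant
# features while the model's decision stays the same. All names and the search
# strategy are assumptions for illustration, not the authors' actual method.

import numpy as np

def alterfactual(model, x, irrelevant_idx, n_trials=1000, scale=1.0, rng=None):
    """Search for an input that differs from x as much as possible on the
    presumed-irrelevant features while keeping the model's prediction fixed.

    model          -- any classifier exposing predict(X) -> labels
    x              -- 1-D feature vector to explain
    irrelevant_idx -- indices of features assumed irrelevant to the decision
    """
    rng = rng or np.random.default_rng(0)
    original_label = model.predict(x.reshape(1, -1))[0]
    best, best_dist = x.copy(), 0.0

    for _ in range(n_trials):
        candidate = x.copy()
        # Perturb only the irrelevant features; relevant ones stay untouched.
        candidate[irrelevant_idx] += rng.normal(0.0, scale, size=len(irrelevant_idx))
        # Keep the candidate only if the model's decision is unchanged...
        if model.predict(candidate.reshape(1, -1))[0] != original_label:
            continue
        # ...and it moves farther from the original than the best so far.
        dist = np.linalg.norm(candidate[irrelevant_idx] - x[irrelevant_idx])
        if dist > best_dist:
            best, best_dist = candidate, dist

    # An "alterfactual": same decision, maximally altered irrelevant features.
    return best
```

With a scikit-learn-style classifier `clf` and a feature vector `x` (both hypothetical here), `alterfactual(clf, x, irrelevant_idx=[2, 5])` would return a variant of `x` that the model classifies identically despite strongly altered values at positions 2 and 5; contrasting the two inputs shows the user what the model ignores, whereas a counterfactual would instead alter relevant features to flip the decision.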