Paper Title
Opportunities and Challenges in Explainable Artificial Intelligence (XAI): A Survey
Paper Authors
Paper Abstract
Nowadays, deep neural networks are widely used in mission-critical systems such as healthcare, self-driving vehicles, and military applications, which have a direct impact on human lives. However, the black-box nature of deep neural networks challenges their use in mission-critical applications, raising ethical and judicial concerns and leading to a lack of trust. Explainable Artificial Intelligence (XAI) is a field of Artificial Intelligence (AI) that promotes a set of tools, techniques, and algorithms that can generate high-quality, interpretable, intuitive, human-understandable explanations of AI decisions. In addition to providing a holistic view of the current XAI landscape in deep learning, this paper provides mathematical summaries of seminal work. We begin by proposing a taxonomy that categorizes XAI techniques by their scope of explanation, the methodology behind the algorithms, and the explanation level or usage, which helps build trustworthy, interpretable, and self-explanatory deep learning models. We then describe the main principles used in XAI research and present a historical timeline of landmark XAI studies from 2007 to 2020. After explaining each category of algorithms and approaches in detail, we evaluate the explanation maps generated by eight XAI algorithms on image data, discuss the limitations of this approach, and suggest potential future directions for improving XAI evaluation.