Paper Title
A Survey on the Explainability of Supervised Machine Learning
Paper Authors
Paper Abstract
Predictions obtained by, e.g., artificial neural networks achieve high accuracy, but humans often perceive the models as black boxes. Insights into the decision making are mostly opaque to humans. Particularly in highly sensitive areas such as healthcare or finance, understanding the decision making is of paramount importance. The decision making behind black boxes needs to be more transparent, accountable, and understandable for humans. This survey paper provides essential definitions and an overview of the different principles and methodologies of explainable Supervised Machine Learning (SML). We conduct a state-of-the-art survey that reviews past and recent explainable SML approaches and classifies them according to the introduced definitions. Finally, we illustrate the principles by means of an explanatory case study and discuss important future directions.