Title
Attributions Beyond Neural Networks: The Linear Program Case
Authors
Abstract
Linear Programs (LPs) have been one of the building blocks in machine learning and have championed recent strides in differentiable optimizers for learning systems. While there exist solvers for even high-dimensional LPs, understanding said high-dimensional solutions poses an orthogonal and unresolved problem. We introduce an approach where we consider neural encodings for LPs that justify the application of attribution methods from explainable artificial intelligence (XAI) designed for neural learning systems. The several encoding functions we propose take into account aspects such as feasibility of the decision space, the cost attached to each input, or the distance to special points of interest. We investigate the mathematical consequences of several XAI methods on said neural LP encodings. We empirically show that the attribution methods Saliency and LIME yield indistinguishable results up to perturbation levels, and we propose the property of Directedness as the main discriminative criterion between Saliency and LIME on the one hand, and a perturbation-based Feature Permutation approach on the other hand. Directedness indicates whether an attribution method gives feature attributions with respect to an increase of that feature. We further observe that the baseline selection problem for Integrated Gradients extends beyond the classical computer vision setting.
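As a rough illustration of the idea (not the paper's actual encodings), the abstract's "cost" and "feasibility" aspects can be read as scalar functions of an LP's decision variables, to which a gradient-based attribution method such as Saliency can then be applied. The toy LP, the function names, and the finite-difference gradient below are all illustrative assumptions, not the authors' definitions.

```python
# Toy LP data: minimize c . x  subject to  A x <= b  and  x >= 0.
# (Hypothetical example; the paper's encoding functions may differ.)
A = [[1.0, 2.0],
     [3.0, 1.0]]
b = [4.0, 6.0]
c = [1.0, 1.5]

def cost_encoding(x):
    """Scalar encoding: the cost c . x attached to the input point x."""
    return sum(ci * xi for ci, xi in zip(c, x))

def feasibility_encoding(x):
    """Signed slack of the tightest constraint: positive iff x is feasible."""
    slacks = [bi - sum(ai * xi for ai, xi in zip(row, x))
              for row, bi in zip(A, b)]
    slacks += list(x)  # slacks of the x >= 0 constraints
    return min(slacks)

def saliency(f, x, eps=1e-6):
    """Saliency-style attribution: the gradient of the encoding f at x,
    approximated here by central finite differences."""
    grads = []
    for i in range(len(x)):
        xp = list(x); xp[i] += eps
        xm = list(x); xm[i] -= eps
        grads.append((f(xp) - f(xm)) / (2 * eps))
    return grads

x0 = [0.5, 1.0]  # a strictly feasible point with a unique tightest constraint
print(saliency(cost_encoding, x0))         # gradient of c . x is c itself
print(saliency(feasibility_encoding, x0))  # only the tightest constraint matters
```

At `x0` the tightest constraint is `x[0] >= 0`, so the feasibility attribution is nonzero only in the first coordinate; this hints at how different encodings surface different aspects of the same LP. Note that `min` is not differentiable where constraints tie, which is one reason perturbation-based methods like Feature Permutation can behave differently from gradient-based ones.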