Paper Title
Neural Basis Models for Interpretability
Paper Authors
Paper Abstract
Due to the widespread use of complex machine learning models in real-world applications, it is becoming critical to explain model predictions. However, these models are typically black-box deep neural networks, explained post-hoc via methods with known faithfulness limitations. Generalized Additive Models (GAMs) are an inherently interpretable class of models that address this limitation by learning a non-linear shape function for each feature separately, followed by a linear model on top. However, these models are typically difficult to train, require numerous parameters, and are difficult to scale. We propose an entirely new subfamily of GAMs that utilizes basis decomposition of shape functions. A small number of basis functions are shared among all features, and are learned jointly for a given task, thus making our model scale much better to large-scale data with high-dimensional features, especially when features are sparse. We propose an architecture denoted as the Neural Basis Model (NBM), which uses a single neural network to learn these bases. On a variety of tabular and image datasets, we demonstrate that for interpretable machine learning, NBMs are the state-of-the-art in accuracy, model size, and throughput, and can easily model all higher-order feature interactions. Source code is available at https://github.com/facebookresearch/nbm-spam.
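To make the basis-decomposition idea concrete, the following is a minimal NumPy sketch (not the authors' implementation, which is in the linked repository) of an NBM-style forward pass: a single small network maps each scalar feature to K shared basis values, each feature's shape function is a learned linear combination of those bases, and the prediction is the additive sum of per-feature contributions. The dimensions, the ReLU MLP, and the random (untrained) weights are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

D, K, H = 4, 8, 16  # number of features, shared bases, hidden units

# Shared basis network b: R -> R^K, a tiny one-hidden-layer MLP applied
# independently to every scalar feature. Weights are random here; in the
# paper's setup they would be learned jointly for the task.
W1, b1 = rng.normal(size=(1, H)), np.zeros(H)
W2, b2 = rng.normal(size=(H, K)), np.zeros(K)

def bases(x):
    """Evaluate the K shared basis functions at each feature of x [N, D] -> [N, D, K]."""
    h = np.maximum(x[..., None] @ W1 + b1, 0.0)  # [N, D, H], ReLU activation
    return h @ W2 + b2                           # [N, D, K]

# Per-feature coefficients: shape function f_i(x_i) = sum_k C[i, k] * b_k(x_i).
C = rng.normal(size=(D, K))
bias = 0.0

def nbm_forward(x):
    B = bases(x)                             # [N, D, K] shared basis values
    shapes = np.einsum("ndk,dk->nd", B, C)   # [N, D] per-feature contributions
    return shapes.sum(axis=-1) + bias, shapes

x = rng.normal(size=(5, D))
pred, shapes = nbm_forward(x)
print(pred.shape, shapes.shape)  # (5,) (5, 4)
```

Note that interpretability comes from the `shapes` array: each column is one feature's additive contribution f_i(x_i), which can be plotted against x_i exactly as with a classical GAM, while only the K shared bases plus a D×K coefficient matrix need to be stored per feature set.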