Paper Title
Invertible Concept-based Explanations for CNN Models with Non-negative Concept Activation Vectors
Paper Authors
Paper Abstract
Convolutional neural network (CNN) models for computer vision are powerful but lack explainability in their most basic form. This deficiency remains a key challenge when applying CNNs in important domains. Recent work on explanations through feature importance of approximate linear models has moved from input-level features (pixels or segments) to features from mid-layer feature maps in the form of concept activation vectors (CAVs). CAVs contain concept-level information and could be learned via clustering. In this work, we rethink the ACE algorithm of Ghorbani et al., proposing an alternative invertible concept-based explanation (ICE) framework to overcome its shortcomings. Based on the requirements of fidelity (approximate models to target models) and interpretability (being meaningful to people), we design measurements and evaluate a range of matrix factorization methods with our framework. We find that non-negative concept activation vectors (NCAVs) from non-negative matrix factorization provide superior performance in interpretability and fidelity based on computational and human subject experiments. Our framework provides both local and global concept-level explanations for pre-trained CNN models.
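As an illustration of the core idea described in the abstract, the sketch below shows how non-negative concept activation vectors could be obtained by applying non-negative matrix factorization to mid-layer CNN feature maps. This is a minimal sketch, not the authors' implementation: the array shapes, the number of concepts, and the use of scikit-learn's NMF with placeholder random activations are assumptions for illustration only.

```python
# Minimal sketch (not the paper's implementation) of deriving non-negative
# concept activation vectors (NCAVs) by factorizing mid-layer CNN feature maps.
# Assumes activations come from after a ReLU layer, so they are non-negative;
# here random non-negative data stands in for real CNN activations.
import numpy as np
from sklearn.decomposition import NMF

# Placeholder feature maps for a batch of images: (n_images, height, width, channels).
n_images, h, w, c = 32, 7, 7, 512
feature_maps = np.random.rand(n_images, h, w, c)

# Flatten all spatial positions across all images into rows of a matrix V,
# so each row is one spatial activation vector of length `channels`.
V = feature_maps.reshape(-1, c)              # shape: (n_images*h*w, c)

# Factorize V ~= S @ P, where the rows of P act as concept directions (NCAVs)
# and S gives per-position concept scores.
n_concepts = 10
nmf = NMF(n_components=n_concepts, init="nndsvda", max_iter=500)
S = nmf.fit_transform(V)                     # concept scores, (n_positions, n_concepts)
P = nmf.components_                          # NCAVs, (n_concepts, c)

# Reshape the scores back into spatial maps: one heat map per concept per image,
# which can be upsampled to highlight where each concept appears in the input.
concept_maps = S.reshape(n_images, h, w, n_concepts)

# The reconstruction error indicates how faithfully the low-rank concept
# representation approximates the original feature maps.
print("relative reconstruction error:",
      np.linalg.norm(V - S @ P) / np.linalg.norm(V))
```

The non-negativity constraint is what makes the concept scores readable as additive "presence" maps; replacing NMF with PCA or k-means clustering in the same pipeline corresponds to the alternative factorization methods the abstract says are compared under the fidelity and interpretability measurements.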