Title
GStarX: Explaining Graph Neural Networks with Structure-Aware Cooperative Games
Authors
Abstract
Explaining machine learning models is an important and increasingly popular area of research. The Shapley value from game theory has been proposed as a prime approach to computing feature importance for model predictions on images, text, tabular data, and, recently, graph neural networks (GNNs) on graphs. In this work, we revisit the appropriateness of the Shapley value for GNN explanation, where the task is to identify the most important subgraph and constituent nodes for a GNN's predictions. We argue that the Shapley value is a non-ideal choice for graph data because, by definition, it is not structure-aware. We propose a Graph Structure-aware eXplanation (GStarX) method that leverages critical graph structure information to improve explanations. Specifically, we define a scoring function based on a new structure-aware value from cooperative game theory proposed by Hamiache and Navarro (HN). When used to score node importance, the HN value exploits the graph structure to attribute cooperation surplus between neighboring nodes, resembling message passing in GNNs, so that node importance scores reflect not only node feature importance but also node structural roles. We demonstrate that GStarX produces qualitatively more intuitive explanations and quantitatively improves explanation fidelity over strong baselines on chemical graph property prediction and text graph sentiment classification.
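To make the abstract's central critique concrete, the following is a minimal sketch of the classical Shapley value that GStarX moves away from. Note how the coalition weights depend only on coalition *sizes*, never on the graph topology: any structure information must be smuggled in through the characteristic function `v`. The toy `v` here (number of edges a coalition covers) is a hypothetical stand-in for a GNN's prediction score on a masked subgraph, not the paper's actual scoring function.

```python
from itertools import combinations
from math import factorial

def shapley_values(players, v):
    """Exact Shapley values by enumerating all coalitions.

    Exponential in len(players); illustrative only. The weight of each
    coalition depends solely on its size -- the formula itself is blind
    to any graph structure among the players.
    """
    n = len(players)
    phi = {p: 0.0 for p in players}
    for p in players:
        others = [q for q in players if q != p]
        for k in range(n):
            for S in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                # Marginal contribution of p to coalition S.
                phi[p] += weight * (v(set(S) | {p}) - v(set(S)))
    return phi

# Toy characteristic function on the path graph 0 - 1 - 2: a coalition's
# value is the number of edges it fully contains (hypothetical stand-in
# for a masked GNN prediction).
edges = {(0, 1), (1, 2)}
v = lambda S: sum((a in S and b in S) for a, b in edges)

print(shapley_values([0, 1, 2], v))  # node 1 (the hub) scores highest
```

For this edge-counting game the Shapley value reduces to half of each node's degree, so the center node of the path is ranked above the endpoints; structure enters only through `v`, which is exactly the limitation the HN value is meant to address.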