Paper Title


FlowX: Towards Explainable Graph Neural Networks via Message Flows

Authors

Gui, Shurui, Yuan, Hao, Wang, Jie, Lao, Qicheng, Li, Kang, Ji, Shuiwang

Abstract


We investigate the explainability of graph neural networks (GNNs) as a step toward elucidating their working mechanisms. While most current methods focus on explaining graph nodes, edges, or features, we argue that, as the inherent functional mechanism of GNNs, message flows are more natural for performing explainability. To this end, we propose a novel method here, known as FlowX, to explain GNNs by identifying important message flows. To quantify the importance of flows, we propose to follow the philosophy of Shapley values from cooperative game theory. To tackle the complexity of computing all coalitions' marginal contributions, we propose a flow sampling scheme to compute Shapley value approximations as initial assessments of further training. We then propose an information-controlled learning algorithm to train flow scores toward diverse explanation targets: necessary or sufficient explanations. Experimental studies on both synthetic and real-world datasets demonstrate that our proposed FlowX and its variants lead to improved explainability of GNNs. The code is available at https://github.com/divelab/DIG.
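The abstract mentions approximating Shapley values by sampling, since enumerating the marginal contributions of all coalitions is intractable. As an illustration only (not the paper's actual algorithm, whose flow-sampling scheme and scoring function are defined in the full text), here is a minimal sketch of the standard Monte Carlo approach to Shapley estimation: sample random orderings of the players (here, message flows) and average each player's marginal contribution. The function names and the toy value function are hypothetical.

```python
import random

def shapley_monte_carlo(players, value_fn, num_samples=200, seed=0):
    """Estimate Shapley values by sampling random permutations.

    players: list of hashable ids (e.g., message flows).
    value_fn: maps a frozenset of players to a scalar score
              (e.g., the GNN's prediction when only those flows are kept).
    """
    rng = random.Random(seed)
    estimates = {p: 0.0 for p in players}
    for _ in range(num_samples):
        order = players[:]
        rng.shuffle(order)
        coalition = set()
        prev = value_fn(frozenset(coalition))
        # Each player's marginal contribution in this random order.
        for p in order:
            coalition.add(p)
            cur = value_fn(frozenset(coalition))
            estimates[p] += cur - prev
            prev = cur
    return {p: total / num_samples for p, total in estimates.items()}
```

For an additive value function the estimate recovers each player's weight exactly; in general, more samples reduce the variance of the approximation, which FlowX then uses only as an initial assessment before learned refinement.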
