Paper Title

Implicit Graph Neural Networks

Paper Authors

Fangda Gu, Heng Chang, Wenwu Zhu, Somayeh Sojoudi, Laurent El Ghaoui

Paper Abstract

Graph Neural Networks (GNNs) are widely used deep learning models that learn meaningful representations from graph-structured data. Due to the finite nature of the underlying recurrent structure, current GNN methods may struggle to capture long-range dependencies in underlying graphs. To overcome this difficulty, we propose a graph learning framework, called Implicit Graph Neural Networks (IGNN), where predictions are based on the solution of a fixed-point equilibrium equation involving implicitly defined "state" vectors. We use the Perron-Frobenius theory to derive sufficient conditions that ensure well-posedness of the framework. Leveraging implicit differentiation, we derive a tractable projected gradient descent method to train the framework. Experiments on a comprehensive range of tasks show that IGNNs consistently capture long-range dependencies and outperform the state-of-the-art GNN models.
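The core idea in the abstract — computing node representations as the solution of a fixed-point equilibrium equation, with a norm condition ensuring well-posedness — can be sketched as follows. This is an illustrative toy, not the authors' code: the function names, the specific equilibrium form `X = relu(W X A + B)`, and the crude spectral-norm rescaling of `W` (a simple stand-in for the paper's Perron-Frobenius sufficient condition) are all assumptions for the sketch.

```python
import numpy as np

def relu(x):
    # Component-wise, 1-Lipschitz activation, as required for contraction.
    return np.maximum(x, 0.0)

def ignn_equilibrium(W, A, B, tol=1e-6, max_iter=1000):
    """Iterate X <- relu(W X A + B) until a fixed point is (approximately) reached.

    W: (d, d) weight matrix, A: (n, n) normalized adjacency,
    B: (d, n) input-dependent bias; returns the (d, n) equilibrium state X.
    (Hypothetical sketch of the equilibrium computation described in the abstract.)
    """
    X = np.zeros_like(B)
    for _ in range(max_iter):
        X_next = relu(W @ X @ A + B)
        if np.linalg.norm(X_next - X) < tol:
            return X_next
        X = X_next
    return X

rng = np.random.default_rng(0)
n, d = 5, 3                        # 5 nodes, 3-dimensional node states
A_raw = rng.random((n, n))
A = A_raw / A_raw.sum(axis=0)      # column-normalized adjacency (assumption)
W = rng.standard_normal((d, d))
# Rescale W so the iteration map is a contraction; the paper instead derives
# a sharper Perron-Frobenius-based sufficient condition for well-posedness.
W *= 0.5 / np.linalg.norm(W, 2)
B = rng.standard_normal((d, n))

X_star = ignn_equilibrium(W, A, B)
# The returned state satisfies the equilibrium equation up to tolerance:
residual = np.linalg.norm(relu(W @ X_star @ A + B) - X_star)
```

Because the iteration "unrolls" to arbitrary depth until convergence, the equilibrium state can propagate information across the whole graph, which is the mechanism behind the long-range-dependency claim; training then differentiates through the fixed point implicitly rather than through the unrolled iterations.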
