Paper Title

Unifying Graph Convolutional Neural Networks and Label Propagation

Paper Authors

Hongwei Wang, Jure Leskovec

Paper Abstract

Label Propagation (LPA) and Graph Convolutional Neural Networks (GCN) are both message passing algorithms on graphs. Both solve the task of node classification, but LPA propagates node label information across the edges of the graph, while GCN propagates and transforms node feature information. However, while conceptually similar, the theoretical relationship between LPA and GCN has not yet been investigated. Here we study the relationship between LPA and GCN in terms of two aspects: (1) feature/label smoothing, where we analyze how the feature/label of one node is spread over its neighbors; and (2) feature/label influence, where we analyze how much the initial feature/label of one node influences the final feature/label of another node. Based on our theoretical analysis, we propose an end-to-end model that unifies GCN and LPA for node classification. In our unified model, edge weights are learnable, and LPA serves as regularization to assist the GCN in learning proper edge weights that lead to improved classification performance. Our model can also be seen as learning attention weights based on node labels, which is more task-oriented than existing feature-based attention models. In a number of experiments on real-world graphs, our model shows superiority over state-of-the-art GCN-based methods in terms of node classification accuracy.
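The abstract describes a unified model in which a GCN propagates features, an LPA branch propagates training labels over the same learnable edge weights, and the LPA loss regularizes those weights. The following is a minimal sketch of that idea in PyTorch, using a small dense adjacency matrix for clarity; the class and parameter names (`UnifiedGCNLPA`, `lpa_weight`, etc.) are illustrative assumptions, not the authors' reference implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class UnifiedGCNLPA(nn.Module):
    """Sketch: a GCN whose edge weights are learnable and regularized by LPA."""

    def __init__(self, num_nodes, in_dim, hidden_dim, num_classes, num_layers=2):
        super().__init__()
        # One learnable logit per potential edge; masked to existing edges below.
        self.edge_logits = nn.Parameter(torch.zeros(num_nodes, num_nodes))
        self.layers = nn.ModuleList([
            nn.Linear(in_dim if i == 0 else hidden_dim,
                      num_classes if i == num_layers - 1 else hidden_dim)
            for i in range(num_layers)
        ])

    def _norm_adj(self, adj):
        # Add self-loops, then row-normalize the learned weights over existing edges.
        adj = adj + torch.eye(adj.size(0), device=adj.device)
        logits = self.edge_logits.masked_fill(adj == 0, float("-inf"))
        return torch.softmax(logits, dim=1)

    def forward(self, x, adj, y_onehot, train_mask):
        a_hat = self._norm_adj(adj)

        # GCN branch: propagate and transform node *features* with learned weights.
        h = x
        for i, layer in enumerate(self.layers):
            h = a_hat @ layer(h)
            if i < len(self.layers) - 1:
                h = F.relu(h)

        # LPA branch: propagate training *labels* with the same learned weights
        # (no feature transformation here).
        y = y_onehot * train_mask.unsqueeze(1).float()
        for _ in range(len(self.layers)):
            y = a_hat @ y

        # Class logits from the GCN, propagated label distribution from LPA.
        return h, y


def gcn_lpa_loss(logits, y_prop, labels, train_mask, lpa_weight=1.0):
    # Joint objective: GCN classification loss + lambda * LPA regularization loss,
    # both evaluated on the training nodes only.
    gcn_loss = F.cross_entropy(logits[train_mask], labels[train_mask])
    lpa_loss = F.nll_loss(torch.log(y_prop[train_mask] + 1e-9), labels[train_mask])
    return gcn_loss + lpa_weight * lpa_loss
```

In a full training loop, both loss terms would be minimized jointly with respect to the GCN parameters and the edge logits, which is the sense in which the abstract says LPA "serves as regularization to assist the GCN in learning proper edge weights."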
