Paper Title

Rethinking Graph Regularization for Graph Neural Networks

Paper Authors

Han Yang, Kaili Ma, James Cheng

Paper Abstract

The graph Laplacian regularization term is usually used in semi-supervised representation learning to provide graph structure information for a model $f(X)$. However, with the recent popularity of graph neural networks (GNNs), directly encoding the graph structure $A$ into the model, i.e., $f(A, X)$, has become the more common approach. We show that graph Laplacian regularization brings little-to-no benefit to existing GNNs, and propose a simple but non-trivial variant of graph Laplacian regularization, called Propagation-regularization (P-reg), to boost the performance of existing GNN models. We provide formal analyses to show that P-reg not only infuses extra information (not captured by traditional graph Laplacian regularization) into GNNs, but also has a capacity equivalent to that of an infinite-depth graph convolutional network. We demonstrate that P-reg can effectively boost the performance of existing GNN models on both node-level and graph-level tasks across many different datasets.
