Paper Title
Understanding Graph Neural Networks from Graph Signal Denoising Perspectives
Paper Authors
Paper Abstract
Graph neural networks (GNNs) have attracted much attention because of their excellent performance on tasks such as node classification. However, there is inadequate understanding of how and why GNNs work, especially for node representation learning. This paper aims to provide a theoretical framework for understanding GNNs, specifically spectral graph convolutional networks and graph attention networks, from graph signal denoising perspectives. Our framework shows that GNNs are implicitly solving graph signal denoising problems: spectral graph convolutions work as denoising node features, while graph attentions work as denoising edge weights. We also show that a linear self-attention mechanism is able to compete with the state-of-the-art graph attention methods. Our theoretical results further lead to two new models, GSDN-F and GSDN-EF, which work effectively for graphs with noisy node features and/or noisy edges. We validate our theoretical findings and also the effectiveness of our new models by experiments on benchmark datasets. The source code is available at \url{https://github.com/fuguoji/GSDN}.
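The abstract's central claim, that spectral graph convolution implicitly solves a graph signal denoising problem, can be sketched numerically. The snippet below is a minimal illustration (not the paper's implementation): it assumes the common denoising objective min_F ||F - X||² + c·tr(FᵀLF), where L is the normalized graph Laplacian, and shows that one gradient-descent step from F = X, with a suitably chosen step size, coincides with GCN-style propagation ÂX. The toy graph and feature matrix are made-up example data.

```python
import numpy as np

# Toy path graph on 4 nodes (hypothetical example data).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

A_hat = A + np.eye(4)                       # add self-loops
d = A_hat.sum(axis=1)
A_norm = A_hat / np.sqrt(np.outer(d, d))    # symmetric normalization D^{-1/2} Â D^{-1/2}
L = np.eye(4) - A_norm                      # normalized graph Laplacian

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 3))             # noisy node features (the graph signal)

# Denoising objective: min_F ||F - X||_F^2 + c * tr(F^T L F).
# Its gradient is 2(F - X) + 2c L F; at F = X the fidelity term vanishes.
c, eta = 1.0, 0.5                           # step size chosen so that 2*eta*c = 1
F = X - eta * (2 * c * (L @ X))             # one gradient-descent step from F = X

# With 2*eta*c = 1, the step reduces exactly to GCN-style propagation A_norm @ X.
print(np.allclose(F, A_norm @ X))           # True
```

This is why the abstract says spectral graph convolutions "work as denoising node features": each propagation layer can be read as one descent step on the smoothness-regularized objective above.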