Paper Title
Towards Better Generalization with Flexible Representation of Multi-Module Graph Neural Networks
Paper Authors
Paper Abstract
Graph neural networks (GNNs) have become compelling models designed to perform learning and inference on graph-structured data. However, little work has been done to understand the fundamental limitations of GNNs for scaling to larger graphs and generalizing to out-of-distribution (OOD) inputs. In this paper, we use a random graph generator to systematically investigate how the graph size and structural properties affect the predictive performance of GNNs. We present specific evidence that the average node degree is a key feature in determining whether GNNs can generalize to unseen graphs, and that the use of multiple node update functions can improve the generalization performance of GNNs when dealing with graphs of multimodal degree distributions. Accordingly, we propose a multi-module GNN framework that allows the network to adapt flexibly to new graphs by generalizing a single canonical nonlinear transformation over aggregated inputs. Our results show that the multi-module GNNs improve the OOD generalization on a variety of inference tasks in the direction of diverse structural features.
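The abstract's core idea, using multiple node update functions and mixing them so the network can adapt to graphs with different degree statistics, can be illustrated with a minimal sketch. The gating scheme, function names, and mean aggregation below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def multi_module_gnn_layer(X, A, module_weights, gate_w):
    """One message-passing layer with K node-update modules (illustrative sketch).

    X             : (n, d) node feature matrix.
    A             : (n, n) binary adjacency matrix.
    module_weights: list of K (d, d) matrices, one per update module.
    gate_w        : (d, K) matrix producing per-node mixture weights.
    """
    # Aggregate neighbor features (mean aggregation; degree clipped to avoid /0).
    deg = A.sum(axis=1, keepdims=True).clip(min=1.0)
    M = (A @ X) / deg

    # Each module applies its own nonlinear transformation to the aggregated input,
    # generalizing the single canonical update of a standard GNN layer.
    outs = np.stack([np.tanh(M @ W) for W in module_weights])  # (K, n, d)

    # Softmax gate over modules, conditioned on the aggregated input, so nodes
    # with different local structure can rely on different update functions.
    logits = M @ gate_w                                        # (n, K)
    g = np.exp(logits - logits.max(axis=1, keepdims=True))
    g = g / g.sum(axis=1, keepdims=True)

    # Per-node convex combination of the K module outputs.
    return np.einsum('nk,knd->nd', g, outs)                    # (n, d)
```

With K = 1 this reduces to an ordinary mean-aggregation GNN layer; increasing K gives the network the flexibility the abstract attributes to multi-module GNNs on graphs with multimodal degree distributions.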