Paper Title
GARNET: Reduced-Rank Topology Learning for Robust and Scalable Graph Neural Networks
Paper Authors
Paper Abstract
Graph neural networks (GNNs) have been increasingly deployed in various applications that involve learning on non-Euclidean data. However, recent studies show that GNNs are vulnerable to graph adversarial attacks. Although several defense methods improve GNN robustness by eliminating adversarial components, they may also impair the underlying clean graph structure that contributes to GNN training. In addition, few of these defense models can scale to large graphs due to their high computational complexity and memory usage. In this paper, we propose GARNET, a scalable spectral method to boost the adversarial robustness of GNN models. GARNET first leverages weighted spectral embedding to construct a base graph, which is not only resistant to adversarial attacks but also retains the critical (clean) graph structure needed for GNN training. Next, GARNET refines the base graph by pruning additional non-critical edges based on a probabilistic graphical model. GARNET has been evaluated on various datasets, including a large graph with millions of nodes. Our extensive experimental results show that GARNET improves adversarial accuracy by up to 13.27% and achieves runtime speedups of up to 14.7x over state-of-the-art GNN (defense) models.
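The abstract only sketches the pipeline, so below is a minimal, hypothetical Python sketch of the two stages it describes: a weighted (reduced-rank) spectral embedding used to build a base graph, followed by edge pruning. The rank `rank`, the kNN construction of the base graph, and the cosine-similarity threshold `tau` are illustrative assumptions, not details from the paper; in particular, the similarity threshold merely stands in for the probabilistic-graphical-model refinement that GARNET actually performs.

```python
# Hypothetical sketch (not the authors' implementation): reduced-rank spectral
# embedding -> kNN base graph -> similarity-based edge pruning.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh
from sklearn.neighbors import kneighbors_graph


def weighted_spectral_embedding(adj: sp.csr_matrix, rank: int = 50) -> np.ndarray:
    """Rank-r embedding U * diag(|lambda|) of the normalized adjacency matrix."""
    deg = np.asarray(adj.sum(axis=1)).ravel()
    d_inv_sqrt = sp.diags(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
    adj_norm = d_inv_sqrt @ adj @ d_inv_sqrt          # D^{-1/2} A D^{-1/2}
    vals, vecs = eigsh(adj_norm, k=rank, which="LM")  # top-r eigenpairs (largest magnitude)
    return vecs * np.abs(vals)                        # weight eigenvectors by |eigenvalue|


def build_base_graph(emb: np.ndarray, k: int = 20) -> sp.csr_matrix:
    """kNN graph in the spectral-embedding space (assumed base-graph construction)."""
    knn = kneighbors_graph(emb, n_neighbors=k, mode="connectivity", include_self=False)
    return ((knn + knn.T) > 0).astype(np.float64)     # symmetrize


def prune_edges(base: sp.csr_matrix, emb: np.ndarray, tau: float = 0.1) -> sp.csr_matrix:
    """Drop edges with low embedding cosine similarity (stand-in for the
    probabilistic-graphical-model refinement described in the abstract)."""
    base = base.tocoo()
    norms = np.linalg.norm(emb, axis=1) + 1e-12
    sims = np.einsum("ij,ij->i", emb[base.row], emb[base.col]) / (norms[base.row] * norms[base.col])
    keep = sims >= tau
    return sp.coo_matrix(
        (base.data[keep], (base.row[keep], base.col[keep])), shape=base.shape
    ).tocsr()
```

A refined adjacency produced this way would then replace the (possibly attacked) input graph when training any downstream GNN; the truncated eigendecomposition and sparse kNN search are what keep the construction scalable in this sketch.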