Paper Title

Wide and Deep Graph Neural Networks with Distributed Online Learning

Authors

Zhan Gao, Fernando Gama, Alejandro Ribeiro

Abstract


Graph neural networks (GNNs) learn representations from network data with naturally distributed architectures, rendering them well-suited candidates for decentralized learning. Oftentimes, this decentralized graph support changes with time due to link failures or topology variations. These changes create a mismatch between the graphs on which GNNs were trained and the ones on which they are tested. Online learning can be used to retrain GNNs at testing time, overcoming this issue. However, most online algorithms are centralized and work on convex problems (which GNNs rarely lead to). This paper proposes the Wide and Deep GNN (WD-GNN), a novel architecture that can be easily updated with distributed online learning mechanisms. The WD-GNN comprises two components: the wide part is a bank of linear graph filters and the deep part is a GNN. At training time, the joint architecture learns a nonlinear representation from data. At testing time, the deep part (nonlinear) is left unchanged, while the wide part is retrained online, leading to a convex problem. We derive convergence guarantees for this online retraining procedure and further propose a decentralized alternative. Experiments on the robot swarm control for flocking corroborate theory and show potential of the proposed architecture for distributed online learning.
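The parallel wide/deep composition described above can be sketched in a few lines: the wide part is a polynomial graph filter that is linear (hence convex to retrain) in its coefficients, while the deep part stacks graph filters with pointwise nonlinearities, and the two outputs are combined. This is a minimal illustrative sketch, not the paper's implementation; the function names, the ReLU nonlinearity, and the additive combination are assumptions for exposition.

```python
import numpy as np

def graph_filter(S, x, h):
    """Linear graph filter: y = sum_k h[k] * S^k x, a polynomial in the
    graph shift operator S. Linear (and thus convex to fit) in h."""
    y = np.zeros_like(x)
    Sk = np.eye(S.shape[0])  # S^0 = I
    for hk in h:
        y += hk * (Sk @ x)
        Sk = S @ Sk  # advance to the next power of S
    return y

def wd_gnn_forward(S, x, h_wide, deep_layer_coeffs):
    """Illustrative WD-GNN forward pass (assumed additive combination).

    Wide part: a linear graph filter with coefficients h_wide; only this
    part would be retrained online at test time, yielding a convex problem.
    Deep part: a toy GNN -- graph filter + pointwise ReLU per layer --
    whose coefficients stay fixed after training."""
    wide_out = graph_filter(S, x, h_wide)
    z = x
    for h in deep_layer_coeffs:
        z = np.maximum(graph_filter(S, z, h), 0.0)  # ReLU is an assumption
    return wide_out + z
```

Because only `h_wide` changes at test time and the output is linear in those coefficients, online retraining of the wide part reduces to a convex (least-squares-type) problem, which is what enables the convergence guarantees and the decentralized variant mentioned in the abstract.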
