Paper Title
Distributed Training of Graph Convolutional Networks
Paper Authors
Paper Abstract
The aim of this work is to develop a fully-distributed algorithmic framework for training graph convolutional networks (GCNs). The proposed method is able to exploit the meaningful relational structure of the input data, which are collected by a set of agents that communicate over a sparse network topology. After formulating the centralized GCN training problem, we first show how to perform inference in a distributed scenario where the underlying data graph is split among different agents. Then, we propose a distributed gradient descent procedure to solve the GCN training problem. The resulting model distributes computation along three lines: during inference, during back-propagation, and during optimization. Convergence to stationary solutions of the GCN training problem is also established under mild conditions. Finally, we propose an optimization criterion for designing the communication topology among the agents so that it matches the graph describing the data relationships. A wide set of numerical results validates our proposal. To the best of our knowledge, this is the first work combining graph convolutional neural networks with distributed optimization.
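As a concrete illustration of the pipeline summarized above, the following is a minimal NumPy sketch of a consensus-based (DGD-style) training loop for a one-layer GCN, where each agent owns a block of nodes, computes a local gradient, and mixes its weight copy with neighboring agents. This is a simplified simulation under assumptions of ours, not the paper's exact algorithm: the toy graph, the partition `blocks`, the ring topology behind the mixing matrix `C_mix`, and helper names such as `local_loss_grad` are all hypothetical, and the inter-agent feature exchange needed for distributed inference is emulated with a global matrix product.

```python
import numpy as np

# --- Toy setup: a small data graph, node features, and labels --------------
rng = np.random.default_rng(0)
N, F, C_CLS, K = 12, 5, 3, 3              # nodes, features, classes, agents
A = (rng.random((N, N)) < 0.3).astype(float)
A = np.maximum(A, A.T)                    # symmetric adjacency...
np.fill_diagonal(A, 0)                    # ...without self-loops
X = rng.standard_normal((N, F))           # node features
y = rng.integers(0, C_CLS, size=N)        # node labels
Y = np.eye(C_CLS)[y]                      # one-hot labels

# Renormalized propagation matrix: A_hat = D^{-1/2} (A + I) D^{-1/2}
A_tilde = A + np.eye(N)
d = A_tilde.sum(axis=1)
A_hat = A_tilde / np.sqrt(np.outer(d, d))

# Each agent owns a contiguous block of nodes (hypothetical partition).
blocks = np.array_split(np.arange(N), K)

# Mixing matrix over a ring of agents (hypothetical topology): doubly
# stochastic, so repeated mixing drives the agents' weights to consensus.
C_mix = np.zeros((K, K))
for k in range(K):
    C_mix[k, (k - 1) % K] = C_mix[k, (k + 1) % K] = 1.0 / 3.0
    C_mix[k, k] = 1.0 - C_mix[k].sum()

def local_loss_grad(W, block):
    """Softmax cross-entropy loss/gradient of a 1-layer GCN on one agent's nodes.

    In a real deployment, the rows of A_hat @ X touching `block` would be
    assembled by exchanging features with the agents owning neighboring
    nodes; here that exchange is simulated by a global matrix product.
    """
    H = A_hat[block] @ X                  # aggregated features of owned nodes
    logits = H @ W
    P = np.exp(logits - logits.max(axis=1, keepdims=True))
    P /= P.sum(axis=1, keepdims=True)     # row-wise softmax
    loss = -np.log(P[np.arange(len(block)), y[block]] + 1e-12).mean()
    grad = H.T @ (P - Y[block]) / len(block)
    return loss, grad

# --- Decentralized gradient descent: mix with neighbors, then step locally -
alpha = 0.2                               # step size
W = [np.zeros((F, C_CLS)) for _ in range(K)]  # one weight copy per agent
for t in range(300):
    grads = [local_loss_grad(W[k], blocks[k])[1] for k in range(K)]
    W = [sum(C_mix[k, j] * W[j] for j in range(K)) - alpha * grads[k]
         for k in range(K)]

print("per-agent losses:",
      np.round([local_loss_grad(W[k], blocks[k])[0] for k in range(K)], 3))
print("max disagreement across agents:",
      max(np.abs(W[k] - W[0]).max() for k in range(1, K)))
```

Because the mixing matrix is doubly stochastic, the consensus step is average-preserving; with a suitably small step size the agents' weight copies cluster around a common stationary point of the aggregate loss, which is the flavor of convergence guarantee the abstract refers to.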