Paper Title
On the Convergence of Quantized Parallel Restarted SGD for Central Server Free Distributed Training
Paper Authors
Paper Abstract
Communication is a crucial phase in distributed training. Because the parameter server (PS) frequently experiences network congestion, recent studies have found that training paradigms without a centralized server outperform traditional server-based paradigms in terms of communication efficiency. However, as model sizes grow, these server-free paradigms are also confronted with substantial communication overhead that seriously degrades the performance of distributed training. In this paper, we focus on the communication efficiency of two server-free paradigms, i.e., Ring All-Reduce (RAR) and gossip, by proposing Quantized Parallel Restarted Stochastic Gradient Descent (QPRSGD), an algorithm that allows multiple local SGD updates before each global synchronization, in synergy with quantization, to significantly reduce the communication overhead. We establish bounds on the accumulated errors according to the synchronization mode and the network topology, which is essential to ensure the convergence property. Under both aggregation paradigms, the algorithm achieves a linear speedup with respect to the number of local updates as well as the number of workers. Remarkably, the proposed algorithm achieves a convergence rate of $O(1/\sqrt{NK^2M})$ under the gossip paradigm and outperforms all existing compression methods, where $N$ is the number of global synchronizations, $K$ is the number of local updates, and $M$ is the number of nodes. An empirical study on various machine learning models demonstrates that, compared with Parallel SGD in a low-bandwidth network, the communication overhead is reduced by 90\% and the convergence speed is boosted by up to 18.6 times.
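For intuition, the following is a minimal sketch of the local-update-then-quantized-synchronization pattern described in the abstract, under the RAR-style (full averaging) paradigm. It is not the authors' implementation: the `quantize` scheme, the hypothetical `grad_fn` stochastic-gradient callback, the learning rate, and the number of local steps are all illustrative assumptions.

```python
import numpy as np

def quantize(x, levels=256):
    # Unbiased stochastic uniform quantization (illustrative; the paper's
    # exact quantizer may differ).
    scale = np.max(np.abs(x)) + 1e-12
    y = np.abs(x) / scale * (levels - 1)
    lower = np.floor(y)
    q = lower + (np.random.rand(*x.shape) < (y - lower))
    return np.sign(x) * q / (levels - 1) * scale

def qprsgd_round(models, grad_fn, lr=0.01, local_steps=8):
    # One global round: each worker runs K local SGD steps on its own copy,
    # then workers exchange quantized models and average them (full
    # averaging, standing in for Ring All-Reduce aggregation).
    for m in models:                     # K local updates per worker (in place)
        for _ in range(local_steps):
            m -= lr * grad_fn(m)
    quantized = [quantize(m) for m in models]  # compress before communication
    avg = np.mean(quantized, axis=0)           # global synchronization
    return [avg.copy() for _ in models]        # "restart" all workers from the synced model
```

Under the gossip paradigm, the full average above would instead be replaced by each node mixing quantized models only with its neighbors according to the network topology, which is where the topology-dependent error bounds in the abstract come into play.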