Paper Title

Linear Convergent Decentralized Optimization with Compression

Paper Authors

Xiaorui Liu, Yao Li, Rongrong Wang, Jiliang Tang, Ming Yan

Paper Abstract

Communication compression has become a key strategy to speed up distributed optimization. However, existing decentralized algorithms with compression mainly focus on compressing DGD-type algorithms, and they are unsatisfactory in terms of convergence rate, stability, and the capability to handle heterogeneous data. Motivated by primal-dual algorithms, this paper proposes the first LinEAr convergent Decentralized algorithm with compression, LEAD. Our theory describes the coupled dynamics of the inexact primal and dual updates as well as the compression error, and we provide the first consensus error bound in such settings without assuming bounded gradients. Experiments on convex problems validate our theoretical analysis, and an empirical study on deep neural networks shows that LEAD is also applicable to non-convex problems.
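For readers unfamiliar with communication compression, the sketch below illustrates one common ingredient in this line of work: a top-k sparsification operator combined with difference compression, where each node transmits only a compressed correction to a shared running estimate so that compression error does not accumulate. This is a minimal, hypothetical illustration of the general technique, not LEAD's actual primal-dual update; all names (`top_k_compress`, `x`, `h`, `q`) are assumptions for the example.

```python
import numpy as np

def top_k_compress(x: np.ndarray, k: int) -> np.ndarray:
    """Keep the k largest-magnitude entries of x and zero out the rest.

    A standard (biased) sparsification operator often used as the
    compression step in compressed decentralized optimization.
    """
    out = np.zeros_like(x)
    if k <= 0:
        return out
    idx = np.argpartition(np.abs(x), -k)[-k:]  # indices of top-k magnitudes
    out[idx] = x[idx]
    return out

# Difference compression (illustrative, not LEAD's notation): instead of
# compressing the state x directly, a node compresses the difference between
# x and a running estimate h that both sender and receiver maintain.
x = np.random.randn(100)          # node's current local state
h = np.zeros_like(x)              # shared running estimate of x
q = top_k_compress(x - h, k=10)   # compressed message actually transmitted
h = h + q                         # sender and receiver update the estimate
```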
