Paper Title
PMGT-VR: A decentralized proximal-gradient algorithmic framework with variance reduction
Paper Authors
Paper Abstract
This paper considers the decentralized composite optimization problem. We propose a novel decentralized variance-reduced proximal-gradient algorithmic framework, called PMGT-VR, which combines several techniques including multi-consensus, gradient tracking, and variance reduction. The proposed framework relies on imitating centralized algorithms, and we demonstrate that algorithms under this framework achieve convergence rates similar to those of their centralized counterparts. We also describe and analyze two representative algorithms, PMGT-SAGA and PMGT-LSVRG, and compare them to existing state-of-the-art proximal algorithms. To the best of our knowledge, PMGT-VR is the first linearly convergent decentralized stochastic algorithm that can solve decentralized composite optimization problems. Numerical experiments are provided to demonstrate the effectiveness of the proposed algorithms.
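For readers unfamiliar with the problem class, a standard formulation of the decentralized composite optimization problem is sketched below; the notation (m nodes, each holding n local samples, smooth losses f_{i,j}, and a common possibly nonsmooth regularizer r, e.g., an l1 penalty) is illustrative and may differ from the paper's exact setup:

\min_{x \in \mathbb{R}^d} \; \frac{1}{m} \sum_{i=1}^{m} f_i(x) + r(x),
\qquad \text{where} \quad f_i(x) = \frac{1}{n} \sum_{j=1}^{n} f_{i,j}(x).

The finite-sum structure of each local objective f_i is what makes variance-reduction techniques such as SAGA and LSVRG applicable, while the nonsmooth term r is handled via proximal steps.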