Paper Title


A Unifying Approximate Method of Multipliers for Distributed Composite Optimization

Paper Authors

Xuyang Wu, Jie Lu

Paper Abstract


This paper investigates solving convex composite optimization on an undirected network, where each node, privately endowed with a smooth component function and a nonsmooth one, is required to minimize the sum of all the component functions throughout the network. To address such a problem, a general Approximate Method of Multipliers (AMM) is developed, which attempts to approximate the Method of Multipliers by virtue of a surrogate function with numerous options. We then design the possibly nonseparable, time-varying surrogate function in various ways, leading to different distributed realizations of AMM. We demonstrate that AMM generalizes more than ten state-of-the-art distributed optimization algorithms, and that certain specific designs of its surrogate function introduce a variety of algorithms that are new to the literature. Furthermore, we show that AMM is able to achieve an $O(1/k)$ rate of convergence to optimality, and the convergence rate becomes linear when the problem is locally restricted strongly convex and smooth. Such convergence rates provide new or stronger convergence results for many prior methods that can be viewed as specializations of AMM.
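To make the construction concrete, the following is a minimal sketch of the classic Method of Multipliers that AMM approximates, written for a standard consensus reformulation of the problem; the constraint matrix $A$, penalty parameter $\rho$, and multiplier $v$ are illustrative notation and not necessarily the paper's exact formulation.

Each node $i$ keeps a local copy $x_i$ of the decision variable, $F(x) = \sum_i f_i(x_i)$ collects the smooth components, $G(x) = \sum_i g_i(x_i)$ the nonsmooth ones, and a linear constraint $Ax = 0$ encodes agreement $x_i = x_j$ across the network edges:
$$\min_x \; F(x) + G(x) \quad \text{s.t.} \quad Ax = 0.$$
The Method of Multipliers then alternates an exact primal minimization of the augmented Lagrangian with a dual ascent step,
$$x^{k+1} = \operatorname*{arg\,min}_x \; F(x) + G(x) + \langle v^k, Ax \rangle + \tfrac{\rho}{2}\|Ax\|^2, \qquad v^{k+1} = v^k + \rho A x^{k+1},$$
and AMM, as described in the abstract, replaces the exact primal minimization with the minimization of a (possibly nonseparable, time-varying) surrogate of the augmented Lagrangian, whose different choices yield the different distributed realizations.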
