Paper Title

Distributed Adaptive Newton Methods with Global Superlinear Convergence

Paper Authors

Zhang, Jiaqi, You, Keyou, Başar, Tamer

Paper Abstract

This paper considers the distributed optimization problem where each node of a peer-to-peer network minimizes a finite sum of objective functions by communicating with its neighboring nodes. In sharp contrast to the existing literature, where the fastest distributed algorithms converge either with a global linear or a local superlinear rate, we propose a distributed adaptive Newton (DAN) algorithm with a global quadratic convergence rate. Our key idea lies in the design of a finite-time set-consensus method with Polyak's adaptive stepsize. Moreover, we introduce a low-rank matrix approximation (LA) technique to compress the innovation of the Hessian matrix, so that each node only needs to transmit a message of dimension $\mathcal{O}(p)$ (where $p$ is the dimension of the decision vectors) per iteration, which is essentially the same as that of first-order methods. Nevertheless, the resulting DAN-LA converges to an optimal solution with a global superlinear rate. Numerical experiments on logistic regression problems are conducted to validate their advantages over existing methods.
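The abstract only sketches the two ingredients of DAN and DAN-LA. As a rough illustration, not the paper's exact update rules, the snippet below shows the textbook Polyak stepsize and a rank-one compression of a Hessian innovation via its dominant eigenpair, so that a node transmits an $\mathcal{O}(p)$ vector instead of a $p \times p$ matrix. The function names and the specific compression rule are assumptions for illustration only.

```python
import numpy as np

def polyak_stepsize(f_x, f_star, grad_norm):
    """Classical Polyak stepsize (f(x_k) - f*) / ||grad f(x_k)||^2.

    The paper adapts this idea to its Newton-type updates; the exact rule
    there may differ from this textbook form.
    """
    return (f_x - f_star) / max(grad_norm ** 2, 1e-12)

def rank1_hessian_innovation(H_prev, H_curr):
    """Compress the symmetric Hessian innovation H_curr - H_prev to rank one.

    Hypothetical sketch: approximate the p x p innovation by its dominant
    eigenpair, so only the O(p) vector u = sqrt(|lambda|) * v (plus a sign)
    needs to be transmitted; the receiver rebuilds sign * u u^T.
    """
    delta = H_curr - H_prev                   # symmetric p x p innovation
    eigvals, eigvecs = np.linalg.eigh(delta)  # eigendecomposition of delta
    i = np.argmax(np.abs(eigvals))            # index of dominant eigenvalue
    lam, v = eigvals[i], eigvecs[:, i]
    u = np.sqrt(np.abs(lam)) * v              # O(p) message
    return np.sign(lam), u
```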
