Paper Title


Improving Rate of Convergence via Gain Adaptation in Multi-Agent Distributed ADMM Framework

Authors

Towfiq Rahman, Zhihua Qu, Toru Namerikawa

Abstract


In this paper, the alternating direction method of multipliers (ADMM) is investigated for distributed optimization problems in a networked multi-agent system. In particular, a new adaptive-gain ADMM algorithm is derived in closed form, under standard convexity assumptions, in order to greatly speed up convergence of ADMM-based distributed optimization. Using the Lyapunov direct approach, the proposed solution embeds control gains into the weighted network matrix among the agents and uses those weights as adaptive penalty gains in the augmented Lagrangian. It is shown that the proposed closed-loop gain-adaptation scheme significantly improves the convergence time of the underlying ADMM optimization. Convergence analysis is provided, and simulation results are included to demonstrate the effectiveness of the proposed scheme.
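The abstract describes adapting the penalty gains of the augmented Lagrangian to accelerate ADMM convergence. The paper's Lyapunov-based, network-weighted adaptation law is not reproduced here; the sketch below instead illustrates the general idea on a toy consensus problem using the well-known residual-balancing heuristic for the penalty parameter. All function names, parameters, and update rules in this snippet are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def consensus_admm(a, rho=1.0, mu=10.0, tau=2.0, iters=200, tol=1e-8):
    """Consensus ADMM for min_x sum_i (x - a_i)^2 with an adaptive penalty.

    Each agent i holds local data a_i and solves
        x_i = argmin (x - a_i)^2 + (rho/2) * (x - z + u_i)^2,
    which has the closed form x_i = (2*a_i + rho*(z - u_i)) / (2 + rho).
    The penalty rho is adapted by residual balancing (a common heuristic,
    not the paper's Lyapunov-based gain adaptation).
    """
    n = len(a)
    x = np.zeros(n)
    z = 0.0
    u = np.zeros(n)  # scaled dual variables
    for _ in range(iters):
        x = (2.0 * a + rho * (z - u)) / (2.0 + rho)  # local agent updates
        z_old = z
        z = float(np.mean(x + u))                    # consensus variable
        u = u + x - z                                # dual ascent step
        r = np.linalg.norm(x - z)                    # primal residual
        s = rho * abs(z - z_old) * np.sqrt(n)        # dual residual
        # Residual balancing: grow rho when primal residual dominates,
        # shrink it when the dual residual dominates; rescale u accordingly.
        if r > mu * s:
            rho *= tau
            u /= tau
        elif s > mu * r:
            rho /= tau
            u *= tau
        if r < tol and s < tol:
            break
    return z
```

For this quadratic objective the consensus value is simply the mean of the local data, so the sketch is easy to sanity-check; the point is only to show where an adaptive penalty enters the ADMM iteration.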
