Paper Title

Global Convergence of Localized Policy Iteration in Networked Multi-Agent Reinforcement Learning

Paper Authors

Yizhou Zhang, Guannan Qu, Pan Xu, Yiheng Lin, Zaiwei Chen, Adam Wierman

Paper Abstract

We study a multi-agent reinforcement learning (MARL) problem where the agents interact over a given network. The goal of the agents is to cooperatively maximize the average of their entropy-regularized long-term rewards. To overcome the curse of dimensionality and to reduce communication, we propose a Localized Policy Iteration (LPI) algorithm that provably learns a near-globally-optimal policy using only local information. In particular, we show that, despite restricting each agent's attention to only its $κ$-hop neighborhood, the agents are able to learn a policy with an optimality gap that decays polynomially in $κ$. In addition, we show the finite-sample convergence of LPI to the global optimal policy, which explicitly captures the trade-off between optimality and computational complexity in choosing $κ$. Numerical simulations demonstrate the effectiveness of LPI.
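
The abstract's central idea is that each agent's policy conditions only on the states of agents within its $κ$-hop neighborhood of the interaction graph, rather than on the full global state. Below is a minimal, hypothetical Python sketch of that restriction, assuming a tabular softmax policy; it is not the paper's LPI algorithm or implementation, and the names `kappa_hop_neighborhood` and `LocalizedSoftmaxPolicy`, as well as the line-graph example, are illustrative assumptions.

```python
# Illustrative sketch (not the authors' implementation): an agent's policy
# depends only on the joint state of its kappa-hop neighbors in the graph.
import numpy as np
from collections import deque

def kappa_hop_neighborhood(adj, agent, kappa):
    """Return the agents within kappa hops of `agent` (BFS on the adjacency matrix)."""
    visited = {agent}
    frontier = deque([(agent, 0)])
    while frontier:
        node, dist = frontier.popleft()
        if dist == kappa:
            continue
        for nbr, connected in enumerate(adj[node]):
            if connected and nbr not in visited:
                visited.add(nbr)
                frontier.append((nbr, dist + 1))
    return sorted(visited)

class LocalizedSoftmaxPolicy:
    """Tabular softmax policy over the joint state of a kappa-hop neighborhood (assumed parameterization)."""
    def __init__(self, agent, adj, kappa, n_local_states, n_actions, rng=None):
        self.neighborhood = kappa_hop_neighborhood(adj, agent, kappa)
        self.n_actions = n_actions
        self.rng = rng or np.random.default_rng(0)
        # One logit vector per joint local state (table lookup for the sketch).
        n_joint = n_local_states ** len(self.neighborhood)
        self.logits = np.zeros((n_joint, n_actions))

    def _index(self, global_state, n_local_states):
        # Encode the states of the kappa-hop neighbors as a single table index.
        idx = 0
        for j in self.neighborhood:
            idx = idx * n_local_states + global_state[j]
        return idx

    def act(self, global_state, n_local_states):
        # Sample an action from the softmax over the local joint state's logits.
        row = self.logits[self._index(global_state, n_local_states)]
        probs = np.exp(row - row.max())
        probs /= probs.sum()
        return self.rng.choice(self.n_actions, p=probs)

# Usage: 4 agents on a line graph; with kappa = 1, agent 1 observes agents {0, 1, 2}.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]])
policy = LocalizedSoftmaxPolicy(agent=1, adj=adj, kappa=1, n_local_states=3, n_actions=2)
print(policy.neighborhood)                          # [0, 1, 2]
print(policy.act([2, 0, 1, 2], n_local_states=3))   # sampled local action
```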
