Paper Title
Accelerating Asynchronous Federated Learning Convergence via Opportunistic Mobile Relaying
Paper Authors
Paper Abstract
This paper presents a study on asynchronous Federated Learning (FL) in a mobile network setting. Most FL algorithms assume that communication between clients and the server is always available; however, this is not the case in many real-world systems. To address this issue, the paper explores the impact of mobility on the convergence performance of asynchronous FL. By exploiting mobility, the study shows that clients can communicate with the server indirectly through another client serving as a relay, creating additional communication opportunities. This enables clients to upload local model updates sooner or receive fresher global models. We propose a new FL algorithm, called FedMobile, that incorporates opportunistic relaying and addresses key questions such as when and how to relay. We prove that FedMobile achieves a convergence rate of $O(\frac{1}{\sqrt{NT}})$, where $N$ is the number of clients and $T$ is the number of communication slots, and show that the optimal design involves an interesting trade-off in the timing of relaying. The paper also presents an extension that manipulates data before relaying to reduce cost and enhance privacy. Experimental results on a synthetic dataset and two real-world datasets verify our theoretical findings.
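To make the opportunistic-relaying idea concrete, here is a minimal Python sketch of the mechanism the abstract describes: a client that cannot reach the server hands its pending update to a nearby client, which uploads it on the client's behalf at its next server contact. All names (`Client`, `run_slot`, `can_reach_server`, the contact probability) are hypothetical illustrations, not the paper's actual FedMobile implementation.

```python
# Minimal sketch of opportunistic relaying in asynchronous FL (assumed toy model,
# not the paper's FedMobile algorithm): intermittent server contact is modeled
# as a random event, and client "meetings" create extra upload opportunities.
import random

class Client:
    def __init__(self, cid):
        self.cid = cid
        self.pending_update = None   # local model update awaiting upload
        self.relayed = []            # updates carried on behalf of other clients

    def train_locally(self, slot):
        # Placeholder for local training; the "update" is just a tagged record.
        self.pending_update = {"owner": self.cid, "slot": slot}

    def can_reach_server(self):
        # Mobility makes server contact intermittent; model it as a coin flip.
        return random.random() < 0.2

def run_slot(clients, server_log, slot):
    for c in clients:
        c.train_locally(slot)

    # Opportunistic relaying: when two clients meet, a client that cannot
    # reach the server hands its update to the neighbor, which acts as a relay.
    a, b = random.sample(clients, 2)
    if a.pending_update and not a.can_reach_server():
        b.relayed.append(a.pending_update)
        a.pending_update = None

    # Direct uploads plus relayed uploads whenever the server is reachable.
    for c in clients:
        if c.can_reach_server():
            if c.pending_update:
                server_log.append(c.pending_update)
                c.pending_update = None
            server_log.extend(c.relayed)
            c.relayed.clear()

random.seed(0)
clients = [Client(i) for i in range(5)]
server_log = []
for t in range(10):
    run_slot(clients, server_log, slot=t)
print(f"server received {len(server_log)} updates in 10 slots")
```

The same simulation with relaying disabled would deliver strictly fewer updates per slot, which is the intuition behind the convergence speed-up; when and whether a relay should forward immediately or wait for a fresher update is the timing trade-off the paper analyzes.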