Paper Title

Client Selection in Federated Learning: Convergence Analysis and Power-of-Choice Selection Strategies

Authors

Yae Jee Cho, Jianyu Wang, Gauri Joshi

Abstract

Federated learning is a distributed optimization paradigm that enables a large number of resource-limited client nodes to cooperatively train a model without data sharing. Several works have analyzed the convergence of federated learning by accounting for data heterogeneity, communication and computation limitations, and partial client participation. However, they assume unbiased client participation, where clients are selected at random or in proportion to their data sizes. In this paper, we present the first convergence analysis of federated optimization for biased client selection strategies, and quantify how the selection bias affects convergence speed. We reveal that biasing client selection towards clients with higher local loss achieves faster error convergence. Using this insight, we propose Power-of-Choice, a communication- and computation-efficient client selection framework that can flexibly span the trade-off between convergence speed and solution bias. Our experiments demonstrate that Power-of-Choice strategies converge up to 3$\times$ faster and give 10% higher test accuracy than the baseline random selection.
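The selection step the abstract describes (sample a candidate set, then keep the highest-loss clients) can be sketched in a few lines. The following is a minimal illustration under assumptions, not the authors' implementation: the function name `power_of_choice_select` and its inputs (`client_losses`, per-client local loss estimates, and `data_fractions`, the data-size weights $p_k$) are hypothetical names chosen for this sketch.

```python
import numpy as np

def power_of_choice_select(client_losses, data_fractions, d, m, rng=None):
    """Hypothetical sketch of one Power-of-Choice selection round.

    Step 1: sample a candidate set of d clients without replacement,
            with probability proportional to data size (fractions p_k).
    Step 2: of those candidates, select the m clients with the highest
            current local loss -- the bias the paper shows yields
            faster error convergence.
    """
    if rng is None:
        rng = np.random.default_rng()
    num_clients = len(client_losses)
    # Candidate set A with |A| = d, sampled proportionally to p_k.
    candidates = rng.choice(num_clients, size=d, replace=False,
                            p=data_fractions)
    # Keep the m candidates with the largest local loss.
    candidate_losses = np.asarray(client_losses)[candidates]
    return candidates[np.argsort(candidate_losses)[-m:]]

# Toy usage with stand-in values:
rng = np.random.default_rng(0)
losses = rng.uniform(size=100)   # stand-in local loss estimates
p = np.full(100, 1 / 100)        # uniform data fractions
selected = power_of_choice_select(losses, p, d=10, m=3, rng=rng)
```

In this sketch the candidate-set size `d` controls the selection skew: `d = m` reduces to (unbiased) sampling proportional to data size, while larger `d` biases selection more strongly towards high-loss clients, which is how the framework spans the convergence-speed/solution-bias trade-off mentioned above.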
