Paper Title
FedLite: A Scalable Approach for Federated Learning on Resource-constrained Clients
Paper Authors
Paper Abstract
In classical federated learning, the clients contribute to the overall training by communicating local updates for the underlying model on their private data to a coordinating server. However, updating and communicating the entire model becomes prohibitively expensive when resource-constrained clients collectively aim to train a large machine learning model. Split learning provides a natural solution in such a setting, where only a small part of the model is stored and trained on clients while the remaining large part of the model only stays at the servers. However, the model partitioning employed in split learning introduces a significant amount of communication cost. This paper addresses this issue by compressing the additional communication using a novel clustering scheme accompanied by a gradient correction method. Extensive empirical evaluations on image and text benchmarks show that the proposed method can achieve up to $490\times$ communication cost reduction with minimal drop in accuracy, and enables a desirable performance vs. communication trade-off.
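To make the compression idea concrete, below is a minimal, hypothetical NumPy sketch (not the paper's implementation): it quantizes a batch of cut-layer activations with plain k-means and transmits only the centroids plus one index per example, instead of the full activation matrix. The paper's actual clustering scheme and its gradient correction method add further machinery beyond this illustration; all variable names and sizes here are assumptions.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    # Plain Lloyd's k-means: cluster the rows of X into k centroids.
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assign each row to its nearest centroid (squared Euclidean distance).
        dists = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
        assign = dists.argmin(1)
        # Recompute each centroid as the mean of its assigned rows.
        for j in range(k):
            members = X[assign == j]
            if len(members):
                centroids[j] = members.mean(0)
    return centroids, assign

# Hypothetical cut-layer activations for a mini-batch:
# 64 examples, each with a 512-dimensional activation vector.
acts = np.random.randn(64, 512).astype(np.float32)

# Compress: send k centroids plus one small index per example,
# rather than the full float32 activation matrix.
k = 8
centroids, assign = kmeans(acts, k)
reconstructed = centroids[assign]  # what the server would decode

full_bytes = acts.nbytes
compressed_bytes = centroids.nbytes + assign.astype(np.uint8).nbytes
print(f"compression ratio: {full_bytes / compressed_bytes:.1f}x")
print(f"relative error: "
      f"{np.linalg.norm(acts - reconstructed) / np.linalg.norm(acts):.3f}")
```

In a split-learning round, the client would transmit the compressed activations to the server, which decodes them, runs the large server-side sub-model, and returns gradients at the cut layer; the gradient correction mentioned in the abstract is the paper's mechanism for counteracting the bias that this lossy compression introduces into training.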