Title
RPN: A Residual Pooling Network for Efficient Federated Learning
Authors
Abstract
Federated learning is a distributed machine learning framework that enables different parties to collaboratively train a model while protecting data privacy and security. Due to model complexity, network unreliability, and connection instability, communication cost has become a major bottleneck in applying federated learning to real-world applications. Existing strategies either require manual tuning of hyperparameters or break the original process into multiple steps, which makes an end-to-end implementation hard to realize. In this paper, we propose a novel compression strategy called Residual Pooling Network (RPN). Our experiments show that RPN not only reduces data transmission effectively but also achieves almost the same performance as standard federated learning. Our new approach operates as an end-to-end procedure, so it can be readily applied to any CNN-based model training scenario to improve communication efficiency, making it easy to deploy in real-world applications without much human intervention.
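To make the idea of "residual" plus "pooling" concrete, below is a minimal NumPy sketch of one plausible interpretation, not the paper's actual algorithm: the client transmits the residual (local minus global weights) of a convolutional layer, downsampled by average pooling over its spatial dimensions, and the server upsamples it by nearest-neighbor repetition to approximate the update. All function names, the pooling choice, and the tensor shapes here are illustrative assumptions.

```python
import numpy as np

def pool_residual(local_w, global_w, k=2):
    """Client side (hypothetical): compute the weight residual and
    downsample its spatial dims by k x k average pooling."""
    delta = local_w - global_w  # residual: what changed locally this round
    out_c, in_c, h, w = delta.shape
    # assumes h and w are divisible by k
    return delta.reshape(out_c, in_c, h // k, k, w // k, k).mean(axis=(3, 5))

def unpool_residual(pooled, k=2):
    """Server side (hypothetical): upsample the pooled residual by
    nearest-neighbor repetition to recover the original shape."""
    return pooled.repeat(k, axis=2).repeat(k, axis=3)

rng = np.random.default_rng(0)
g = rng.normal(size=(8, 4, 4, 4))          # global conv-layer weights
l = g + 0.01 * rng.normal(size=g.shape)    # weights after local training
p = pool_residual(l, g)                    # what gets transmitted
approx = g + unpool_residual(p)            # server's approximate update
print(p.size / g.size)                     # 0.25 -> 4x smaller payload
```

Transmitting pooled residuals rather than full weights is what would make the scheme end-to-end: no per-layer hyperparameter search or multi-step pipeline is needed, only a fixed pooling factor.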