Paper Title
Adap DP-FL: Differentially Private Federated Learning with Adaptive Noise
Paper Authors
Paper Abstract
Federated learning seeks to address the problem of isolated data islands by having clients disclose only their locally trained models. However, it has been demonstrated that private information can still be inferred by analyzing local model parameters, such as deep neural network weights. Recently, differential privacy has been applied to federated learning to protect data privacy, but the added noise may significantly degrade learning performance. Typically, in previous work, training parameters were clipped equally and noise was added uniformly; the heterogeneity and convergence behavior of training parameters were not considered at all. In this paper, we propose a differentially private scheme for federated learning with adaptive noise (Adap DP-FL). Specifically, to account for gradient heterogeneity, we perform adaptive gradient clipping for different clients and different rounds; to exploit gradient convergence, we add correspondingly decreasing noise. Extensive experiments on real-world datasets demonstrate that our Adap DP-FL significantly outperforms previous methods.
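The two ideas in the abstract, per-client adaptive clipping and round-decaying noise, can be sketched roughly as follows. This is a minimal illustration, not the paper's exact algorithm: the exponential-moving-average threshold update, the exponential decay schedule, and all function names and default parameters are assumptions made for the example.

```python
import math
import random

def update_clip_threshold(prev_threshold, grad_norm, beta=0.9):
    """Adapt a client's clipping threshold toward its observed gradient
    norms via an exponential moving average (assumed heuristic)."""
    return beta * prev_threshold + (1.0 - beta) * grad_norm

def clip_and_noise(grad, clip_threshold, round_t, sigma0=1.0, decay=0.99):
    """Clip a gradient vector to `clip_threshold` in L2 norm, then add
    Gaussian noise whose scale decays geometrically with the round index
    (illustrative decreasing-noise schedule)."""
    norm = math.sqrt(sum(g * g for g in grad))
    scale = min(1.0, clip_threshold / (norm + 1e-12))
    clipped = [g * scale for g in grad]
    # Noise standard deviation shrinks as training converges.
    sigma = sigma0 * (decay ** round_t) * clip_threshold
    return [g + random.gauss(0.0, sigma) for g in clipped]
```

With `sigma0=0.0` the second function reduces to plain L2 clipping, which makes the clipping behavior easy to check in isolation; in a real DP-FL setting, the noise scale and threshold would have to be chosen to satisfy a target privacy budget.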