Paper Title
LDP-FL: Practical Private Aggregation in Federated Learning with Local Differential Privacy
Paper Authors
Paper Abstract
Training machine learning models on sensitive user data has raised increasing privacy concerns in many areas. Federated learning is a popular approach for privacy protection that collects local gradient information instead of real data. One way to achieve a strict privacy guarantee is to apply local differential privacy to federated learning. However, previous works do not give a practical solution, due to three issues. First, the noisy data is close to its original value with high probability, increasing the risk of information exposure. Second, a large variance is introduced into the estimated average, causing poor accuracy. Last, the privacy budget explodes due to the high dimensionality of weights in deep learning models. In this paper, we propose a novel design of a local differential privacy mechanism for federated learning to address the above issues. It makes the perturbed data more distinct from its original value and introduces lower variance. Moreover, the proposed mechanism bypasses the curse of dimensionality by splitting and shuffling model updates. A series of empirical evaluations on three commonly used datasets, MNIST, Fashion-MNIST, and CIFAR-10, demonstrate that our solution can not only achieve superior deep learning performance but also provide a strong privacy guarantee at the same time.
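To make the abstract's two ideas concrete, below is a minimal sketch, not the authors' released implementation. It assumes each weight w lies in a known interval [c - r, c + r] and uses a randomized-response-style mechanism that reports one of two points outside that interval, with probabilities chosen so the estimate is unbiased; the helper names `perturb` and `split_and_shuffle` are hypothetical.

```python
# Hedged sketch of (i) a two-point LDP weight perturbation and
# (ii) split-and-shuffle of a model update. Illustrative only; the exact
# probabilities and function names are assumptions, not the paper's code.
import math
import random
from typing import List, Tuple


def perturb(w: float, c: float, r: float, eps: float) -> float:
    """Report one of two extreme points so that E[output] = w.

    Both candidate outputs c +/- r*(e^eps + 1)/(e^eps - 1) lie strictly
    outside [c - r, c + r], so the noisy value is always far from the
    original weight, while the estimator remains unbiased.
    """
    scale = r * (math.exp(eps) + 1.0) / (math.exp(eps) - 1.0)
    # Probability of reporting the upper point; linear in (w - c).
    p_up = ((w - c) * (math.exp(eps) - 1.0) + r * (math.exp(eps) + 1.0)) / (
        2.0 * r * (math.exp(eps) + 1.0)
    )
    return c + scale if random.random() < p_up else c - scale


def split_and_shuffle(update: List[float]) -> List[Tuple[int, float]]:
    """Split an update into (index, weight) pairs and shuffle them.

    The server then sees each dimension in isolation rather than a full
    per-client weight vector, which is how "splitting and shuffling"
    avoids paying the privacy budget once per dimension of the vector.
    """
    pairs = list(enumerate(update))
    random.shuffle(pairs)  # in deployment, a trusted shuffler would do this
    return pairs


if __name__ == "__main__":
    # Toy check of unbiasedness: the mean of many perturbed copies of one
    # weight should approach the true weight.
    w, c, r, eps = 0.3, 0.0, 1.0, 1.0
    est = sum(perturb(w, c, r, eps) for _ in range(100_000)) / 100_000
    print(f"true weight {w:.3f}, private estimate {est:.3f}")
```

In this sketch, the reported value is never close to the original weight (addressing the first issue), averaging many unbiased reports keeps the estimated mean accurate (the second), and shuffling the (index, value) pairs breaks the link between a client and its full high-dimensional update (the third).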