Title

Mitigating Sybil Attacks on Differential Privacy based Federated Learning

Authors

Yupeng Jiang, Yong Li, Yipeng Zhou, Xi Zheng

Abstract

In federated learning, machine learning and deep learning models are trained globally across distributed devices. The state-of-the-art privacy-preserving technique in this setting is user-level differential privacy. However, such a mechanism is vulnerable to specific model poisoning attacks such as Sybil attacks, in which a malicious adversary creates multiple fake clients or colludes with compromised devices to mount direct manipulation of model updates. Recent defenses against model poisoning struggle to detect Sybil attacks when differential privacy is in use, because the privacy mechanism masks clients' model updates with perturbation. In this work, we implement the first Sybil attacks on differential-privacy-based federated learning architectures and show their impact on model convergence. We randomly compromise some clients and manipulate the noise level, reflected by the local differential privacy budget epsilon, applied to the local model updates of these Sybil clients, so that the global model's convergence rate decreases or the model even diverges. We apply our attacks to two recent aggregation defense mechanisms, Krum and Trimmed Mean. Our evaluation on the MNIST and CIFAR-10 datasets shows that our attacks effectively slow down the convergence of the global models. We then propose a defense that monitors the average loss of all participants in each round to detect convergence anomalies, based on the prediction cost reported by each client. Our empirical study demonstrates that this defense effectively mitigates the impact of the Sybil attacks on model convergence.
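The core of the attack described above is that a Sybil client can apply an arbitrarily small privacy budget epsilon, which under the Gaussian mechanism translates into arbitrarily large noise on its reported model update. A minimal sketch of this idea, assuming the standard Gaussian-mechanism calibration with update clipping (the abstract does not specify the exact mechanism; the function names and parameter values are illustrative):

```python
import numpy as np

def gaussian_sigma(epsilon: float, delta: float, sensitivity: float) -> float:
    # Classic Gaussian-mechanism calibration (valid for epsilon < 1):
    # sigma >= sqrt(2 * ln(1.25 / delta)) * sensitivity / epsilon.
    # Smaller epsilon -> larger noise standard deviation.
    return np.sqrt(2.0 * np.log(1.25 / delta)) * sensitivity / epsilon

def perturb_update(update: np.ndarray, epsilon: float,
                   delta: float = 1e-5, clip_norm: float = 1.0) -> np.ndarray:
    # Clip the local update to bound its sensitivity, then add Gaussian noise.
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    sigma = gaussian_sigma(epsilon, delta, clip_norm)
    return clipped + np.random.normal(0.0, sigma, size=update.shape)

dim = 1000
honest = perturb_update(np.full(dim, 0.01), epsilon=0.9)    # moderate noise
sybil  = perturb_update(np.full(dim, 0.01), epsilon=0.001)  # ~900x noisier
```

Because both updates are valid outputs of the same privacy mechanism, the server cannot distinguish a Sybil's deliberately over-noised update from an honest client's conservative privacy choice.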
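For reference, the two aggregation defenses the attack is evaluated against can be sketched as follows, using the standard formulations of coordinate-wise Trimmed Mean (Yin et al.) and Krum (Blanchard et al.); parameter names here are ours, not the paper's:

```python
def trimmed_mean(updates: np.ndarray, trim_k: int) -> np.ndarray:
    # Coordinate-wise trimmed mean: drop the trim_k largest and trim_k
    # smallest values in every coordinate, then average the remainder.
    sorted_u = np.sort(updates, axis=0)  # updates: (n_clients, dim)
    return sorted_u[trim_k : updates.shape[0] - trim_k].mean(axis=0)

def krum(updates: np.ndarray, n_byzantine: int) -> np.ndarray:
    # Krum: select the single update whose summed squared distance to its
    # n - n_byzantine - 2 nearest neighbours is smallest (requires
    # n >= 2 * n_byzantine + 3).
    n = updates.shape[0]
    dists = np.linalg.norm(updates[:, None, :] - updates[None, :, :], axis=2) ** 2
    scores = []
    for i in range(n):
        neighbours = np.sort(np.delete(dists[i], i))[: n - n_byzantine - 2]
        scores.append(neighbours.sum())
    return updates[int(np.argmin(scores))]
```

Both rules assume malicious updates are statistical outliers; heavily noised Sybil updates that still fall within the spread produced by legitimate differential privacy noise can evade them, which is the gap the paper exploits.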
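The proposed defense monitors the average client-reported loss each round and flags convergence anomalies. A hedged sketch of one way to realize this, assuming a simple moving-average threshold rule (the abstract does not give the exact detection statistic; the window size and tolerance are illustrative):

```python
from collections import deque

class LossMonitor:
    # Flag a round as anomalous when the mean client-reported loss jumps
    # well above its recent moving average, signalling stalled convergence.
    def __init__(self, window: int = 5, tolerance: float = 1.5):
        self.history = deque(maxlen=window)
        self.tolerance = tolerance

    def is_anomalous(self, client_losses: list[float]) -> bool:
        mean_loss = sum(client_losses) / len(client_losses)
        if len(self.history) == self.history.maxlen:
            baseline = sum(self.history) / len(self.history)
            if mean_loss > self.tolerance * baseline:
                # Anomalous round: do not fold it into the baseline.
                return True
        self.history.append(mean_loss)
        return False
```

On a flagged round the server could, for example, discard the round's aggregate or scrutinize the clients whose reported prediction costs deviate most from the cohort.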
