Paper Title

Exploiting Defenses against GAN-Based Feature Inference Attacks in Federated Learning

Authors

Xinjian Luo, Xianglong Zhang

Abstract


Federated learning (FL) is a decentralized model training framework that aims to merge isolated data islands while maintaining data privacy. However, recent studies have revealed that Generative Adversarial Network (GAN) based attacks can be employed in FL to learn the distribution of private datasets and reconstruct recognizable images. In this paper, we exploit defenses against GAN-based attacks in FL and propose a framework, Anti-GAN, to prevent attackers from learning the real distribution of the victim's data. The core idea of Anti-GAN is to manipulate the visual features of private training images so that they remain indistinguishable to human eyes even when restored by attackers. Specifically, Anti-GAN projects the private dataset onto a GAN's generator and combines the generated fake images with the actual images to create the training dataset, which is then used for federated model training. The experimental results demonstrate that Anti-GAN is effective in preventing attackers from learning the distribution of private images while causing minimal harm to the accuracy of the federated model.
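The image-combination step described in the abstract can be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's actual method: the function name `build_obfuscated_dataset`, the pixel-wise blend, and the weight `alpha` are all assumptions standing in for however Anti-GAN actually combines generator output with private images.

```python
import numpy as np

def build_obfuscated_dataset(real_images, fake_images, alpha=0.5):
    """Blend each private image with a GAN-generated fake image.

    A simplified stand-in for Anti-GAN's mixing step: the blended
    images, rather than the raw private images, would be used for
    federated model training. `alpha` (assumed here) controls how
    much of the real image survives in the training data.
    """
    assert real_images.shape == fake_images.shape
    mixed = alpha * real_images + (1.0 - alpha) * fake_images
    # keep pixel values in the valid [0, 1] range
    return np.clip(mixed, 0.0, 1.0)

# Toy data standing in for private images and generator samples.
rng = np.random.default_rng(0)
real = rng.random((4, 28, 28))   # pretend private MNIST-like images
fake = rng.random((4, 28, 28))   # pretend GAN generator output
train = build_obfuscated_dataset(real, fake, alpha=0.6)
```

In a real pipeline, `fake` would come from a generator trained on (or conditioned by) the private dataset, and `train` would replace the raw images in the client's local training loop.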
