Paper Title
Private Dataset Generation Using Privacy Preserving Collaborative Learning
Paper Authors
Paper Abstract
With the increasing use of deep learning algorithms in many applications, new research questions related to privacy and adversarial attacks are emerging. However, improving deep learning algorithms requires more and more data to be shared within the research community. Methodologies such as federated learning, differential privacy, and additive secret sharing provide a way to train machine learning models at the edge without moving the data off the edge device. However, these approaches are computationally intensive and prone to adversarial attacks. Therefore, this work introduces FedCollabNN, a privacy-preserving framework for training machine learning models at the edge that is computationally efficient and robust against adversarial attacks. Simulation results on the MNIST dataset indicate the effectiveness of the framework.
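The abstract cites additive secret sharing as one of the building blocks for privacy-preserving training at the edge. The following is a minimal, generic sketch of additive secret sharing over a finite field; it is not the paper's FedCollabNN protocol, and the modulus `Q`, party count, and helper names are assumptions made only for illustration.

```python
import random

# Large prime modulus for the finite field (assumed for illustration).
Q = 2**31 - 1

def share(secret, n_parties=3):
    """Split an integer secret into n additive shares modulo Q."""
    shares = [random.randrange(Q) for _ in range(n_parties - 1)]
    # The last share is chosen so that all shares sum to the secret mod Q.
    shares.append((secret - sum(shares)) % Q)
    return shares

def reconstruct(shares):
    """Recover the secret by summing all shares modulo Q."""
    return sum(shares) % Q

# Usage: each edge device could hold one share of a value (e.g., a quantized
# model update), so no single party ever sees the plaintext value.
w = 123456
s = share(w)
assert reconstruct(s) == w
```

In this style of scheme, sums of shared values can be computed share-wise before reconstruction, which is what makes it attractive for aggregating model updates without exposing any individual contribution.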