Paper Title


LDP-Fed: Federated Learning with Local Differential Privacy

Paper Authors

Stacey Truex, Ling Liu, Ka-Ho Chow, Mehmet Emre Gursoy, Wenqi Wei

Abstract

This paper presents LDP-Fed, a novel federated learning system with a formal privacy guarantee using local differential privacy (LDP). Existing LDP protocols are developed primarily to ensure data privacy in the collection of single numerical or categorical values, such as click counts in Web access logs. However, in federated learning, model parameter updates are collected iteratively from each participant and consist of high-dimensional, continuous values with high precision (tens of digits after the decimal point), making existing LDP protocols inapplicable. To address this challenge in LDP-Fed, we design and develop two novel approaches. First, LDP-Fed's LDP Module provides a formal differential privacy guarantee for the repeated collection of model training parameters in the federated training of large-scale neural networks over multiple individual participants' private datasets. Second, LDP-Fed implements a suite of selection and filtering techniques for perturbing and sharing select parameter updates with the parameter server. We validate our system deployed with a condensed LDP protocol in training deep neural networks on public data. We compare this version of LDP-Fed, coined CLDP-Fed, with other state-of-the-art approaches with respect to model accuracy, privacy preservation, and system capabilities.
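The two ideas the abstract highlights, client-side perturbation of parameter updates under LDP, combined with selecting and filtering which updates are shared, can be illustrated with a minimal sketch. This is not the paper's CLDP protocol: the clipping bound, top-k selection rule, Laplace mechanism, and the function name `ldp_perturb_update` are all illustrative assumptions, and a real deployment would also account for dimensionality and repeated collection when budgeting epsilon.

```python
import random


def ldp_perturb_update(update, epsilon, clip=1.0, k=None):
    """Illustrative client-side LDP step (NOT the paper's CLDP protocol):
    keep only the k largest-magnitude coordinates ("selection and
    filtering"), clip each kept value to [-clip, clip], and add Laplace
    noise scaled to the per-coordinate sensitivity 2*clip."""
    indices = range(len(update))
    if k is not None:
        # Selection: share only the k largest-magnitude parameter updates.
        indices = sorted(indices, key=lambda i: abs(update[i]), reverse=True)[:k]
    chosen = set(indices)

    scale = 2.0 * clip / epsilon  # Laplace scale b = sensitivity / epsilon
    noisy = []
    for i, value in enumerate(update):
        if i not in chosen:
            noisy.append(0.0)  # filtered out: nothing shared for this coordinate
            continue
        value = max(-clip, min(clip, value))  # clip to the bounded range
        # Laplace(0, scale) = random sign * Exponential(mean=scale)
        noise = random.choice([-1.0, 1.0]) * random.expovariate(1.0 / scale)
        noisy.append(value + noise)
    return noisy


# Example: a 4-dimensional update; only the two largest coordinates are shared.
perturbed = ldp_perturb_update([0.3, -1.7, 0.05, 0.9], epsilon=1.0, k=2)
print(perturbed)  # indices 0 and 2 are filtered to 0.0; 1 and 3 are noisy
```

The sketch keeps the update's dimensionality so the parameter server can aggregate positionally; the filtered coordinates simply contribute nothing that round, which is one plausible reading of "sharing select parameter updates".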
