Paper Title


User-Level Differential Privacy against Attribute Inference Attack of Speech Emotion Recognition in Federated Learning

Authors

Tiantian Feng, Raghuveer Peri, Shrikanth Narayanan

Abstract


Many existing privacy-enhanced speech emotion recognition (SER) frameworks focus on perturbing the original speech data through adversarial training within a centralized machine learning setup. However, this privacy protection scheme can fail since the adversary can still access the perturbed data. In recent years, distributed learning algorithms, especially federated learning (FL), have gained popularity as a means to protect privacy in machine learning applications. While FL offers good intuition for safeguarding privacy by keeping the data on local devices, prior work has shown that privacy attacks, such as attribute inference attacks, are achievable against SER systems trained using FL. In this work, we propose to evaluate user-level differential privacy (UDP) in mitigating the privacy leaks of the SER system in FL. UDP provides theoretical privacy guarantees with privacy parameters $ε$ and $δ$. Our results show that UDP can effectively decrease attribute information leakage while preserving the utility of the SER system when the adversary accesses one model update. However, the efficacy of UDP suffers when the FL system leaks more model updates to the adversary. We make the code publicly available to reproduce the results at https://github.com/usc-sail/fed-ser-leakage.
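The core of user-level DP in FL is sanitizing each client's whole model update before it reaches the server: clip the update to a bounded L2 norm, then add Gaussian noise calibrated to that bound. The sketch below illustrates this idea only; the function name, parameter choices, and use of NumPy are illustrative assumptions, not the paper's implementation (which should be consulted in the linked repository), and mapping `noise_multiplier` to concrete $(ε, δ)$ values requires a separate privacy accountant.

```python
import numpy as np

def udp_sanitize_update(update, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """Sketch of user-level DP sanitization of one client's model update.

    update: list of NumPy arrays (one per model parameter tensor).
    clip_norm: L2 bound applied to the flattened update (the sensitivity).
    noise_multiplier: Gaussian noise stddev as a multiple of clip_norm.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Global L2 norm across all parameter tensors of this update.
    flat = np.concatenate([p.ravel() for p in update])
    norm = np.linalg.norm(flat)
    # Scale the update down so its norm is at most clip_norm.
    scale = min(1.0, clip_norm / (norm + 1e-12))
    clipped = [p * scale for p in update]
    # Gaussian noise calibrated to the clipping bound.
    sigma = noise_multiplier * clip_norm
    return [p + rng.normal(0.0, sigma, size=p.shape) for p in clipped]
```

A larger `noise_multiplier` gives a stronger (smaller-$ε$) guarantee per released update but degrades SER utility, which matches the trade-off the abstract describes; likewise, each additional model update the adversary observes consumes more of the privacy budget.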
