Paper Title

Information Laundering for Model Privacy

Paper Authors

Xinran Wang, Yu Xiang, Jun Gao, Jie Ding

Paper Abstract

In this work, we propose information laundering, a novel framework for enhancing model privacy. Unlike data privacy that concerns the protection of raw data information, model privacy aims to protect an already-learned model that is to be deployed for public use. The private model can be obtained from general learning methods, and its deployment means that it will return a deterministic or random response for a given input query. An information-laundered model consists of probabilistic components that deliberately maneuver the intended input and output for queries to the model, so the model's adversarial acquisition is less likely. Under the proposed framework, we develop an information-theoretic principle to quantify the fundamental tradeoffs between model utility and privacy leakage and derive the optimal design.
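The deployment described above can be sketched as a wrapper around the private model: a probabilistic kernel perturbs the incoming query, the private model answers, and a second kernel perturbs the response before it is returned. This is a minimal illustrative sketch, not the paper's derived optimal design; the function names and additive-Gaussian kernels are assumptions chosen for simplicity.

```python
import random

def laundered_model(model, input_kernel, output_kernel):
    """Wrap a private model with probabilistic input/output components.

    Hypothetical sketch: `input_kernel` and `output_kernel` play the
    role of randomized maps that deliberately maneuver the intended
    input and output of each query, trading utility for privacy.
    """
    def query(x):
        x_perturbed = input_kernel(x)   # randomize the intended input
        y = model(x_perturbed)          # private model's response
        return output_kernel(y)         # randomize the returned output
    return query

# Toy example: a scalar private model with additive-Gaussian kernels.
random.seed(0)
private_model = lambda x: 2.0 * x + 1.0
gauss_kernel = lambda sigma: (lambda v: v + random.gauss(0.0, sigma))

public_api = laundered_model(private_model, gauss_kernel(0.1), gauss_kernel(0.1))
response = public_api(3.0)  # close to 7.0, but randomized on each call
```

Because each call draws fresh noise, repeated identical queries return different responses, which is what makes adversarial reconstruction of the underlying model harder; the paper's information-theoretic principle quantifies how much such randomization costs in utility.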
