Paper Title

A Blessing of Dimensionality in Membership Inference through Regularization

Paper Authors

Tan, Jasper; LeJeune, Daniel; Mason, Blake; Javadi, Hamid; Baraniuk, Richard G.

Paper Abstract

Is overparameterization a privacy liability? In this work, we study the effect that the number of parameters has on a classifier's vulnerability to membership inference attacks. We first demonstrate how the number of parameters of a model can induce a privacy-utility trade-off: increasing the number of parameters generally improves generalization performance at the expense of lower privacy. However, remarkably, we then show that if coupled with proper regularization, increasing the number of parameters of a model can actually simultaneously increase both its privacy and performance, thereby eliminating the privacy-utility trade-off. Theoretically, we demonstrate this curious phenomenon for logistic regression with ridge regularization in a bi-level feature ensemble setting. Pursuant to our theoretical exploration, we develop a novel leave-one-out analysis tool to precisely characterize the vulnerability of a linear classifier to the optimal membership inference attack. We empirically exhibit this "blessing of dimensionality" for neural networks on a variety of tasks using early stopping as the regularizer.
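
To make the abstract's setup concrete, below is a minimal, hypothetical sketch, not the paper's method: it trains ridge-regularized logistic regression on synthetic data and measures vulnerability with the simple loss-thresholding membership inference baseline (Yeom et al.-style), rather than the optimal attack the paper characterizes via its leave-one-out analysis. The data model and names such as `make_data` and `attack_auc` are illustrative assumptions.

```python
# Sketch (assumed setup, not the paper's): ridge-regularized logistic
# regression attacked by loss thresholding. Members tend to have lower loss,
# so the attack scores each example by its negative loss.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def make_data(n, d):
    # Hypothetical synthetic task: Gaussian features, noisy linear labels.
    X = rng.normal(size=(n, d))
    w = rng.normal(size=d) / np.sqrt(d)
    y = (X @ w + 0.5 * rng.normal(size=n) > 0).astype(int)
    return X, y

def attack_auc(model, X_in, y_in, X_out, y_out):
    # Loss-thresholding baseline: AUC of distinguishing members (label 1)
    # from non-members (label 0) using the model's per-example loss.
    def losses(X, y):
        p = model.predict_proba(X)[np.arange(len(y)), y]
        return -np.log(np.clip(p, 1e-12, 1.0))
    scores = -np.concatenate([losses(X_in, y_in), losses(X_out, y_out)])
    labels = np.concatenate([np.ones(len(y_in)), np.zeros(len(y_out))])
    return roc_auc_score(labels, scores)

n, d = 200, 500  # overparameterized regime: more features than samples
X_tr, y_tr = make_data(n, d)
X_te, y_te = make_data(n, d)

for C in [1e-2, 1e0, 1e2]:  # sklearn's C is the inverse ridge strength
    clf = LogisticRegression(C=C, max_iter=5000).fit(X_tr, y_tr)
    print(f"C={C:g}  test acc={clf.score(X_te, y_te):.3f}  "
          f"attack AUC={attack_auc(clf, X_tr, y_tr, X_te, y_te):.3f}")
```

Sweeping the feature count d together with the ridge strength in a setup like this is one way to probe the abstract's claim: with regularization tuned appropriately, one can check whether growing the parameter count raises test accuracy without raising the attack's AUC.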
