Title
Debiasing Learning for Membership Inference Attacks Against Recommender Systems
Authors
Abstract
Learned recommender systems may inadvertently leak information about their training data, leading to privacy violations. We investigate privacy threats faced by recommender systems through the lens of membership inference. In such attacks, an adversary aims to infer whether a user's data was used to train the target recommender. To this end, previous work uses a shadow recommender to derive training data for the attack model, and then predicts membership by computing difference vectors between users' historical interactions and the items recommended to them. State-of-the-art methods face two challenging problems: (1) the attack model's training data is biased due to the gap between the shadow and target recommenders, and (2) hidden states in recommenders are not observable, resulting in inaccurate estimation of difference vectors. To address these limitations, we propose DL-MIA, a Debiasing Learning framework for Membership Inference Attacks against recommender systems, with four main components: (1) a difference vector generator, (2) a disentangled encoder, (3) a weight estimator, and (4) an attack model. To mitigate the gap between recommenders, a variational auto-encoder (VAE) based disentangled encoder is devised to identify recommender-invariant and recommender-specific features. To reduce the estimation bias, we design a weight estimator that assigns a truth-level score to each difference vector to indicate its estimation accuracy. We evaluate DL-MIA against both general and sequential recommenders on three real-world datasets. Experimental results show that DL-MIA effectively alleviates training and estimation biases simultaneously, and achieves state-of-the-art attack performance.
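To make the core input of the attack concrete, the sketch below illustrates the general idea of a difference vector: summarizing a user's historical interactions and their recommended items as mean item embeddings and taking the difference, on the intuition that the two are closer for training-set members. This is a minimal, hypothetical illustration (the embedding matrix, ID lists, and function name are all assumptions), not the paper's actual generator, which additionally addresses the unobservable hidden states discussed above.

```python
import numpy as np

def difference_vector(item_emb, interacted_ids, recommended_ids):
    """Difference between the mean embedding of a user's historical
    interactions and the mean embedding of the items recommended to
    them. For members (users in the training set), the recommender
    tends to suggest items close to their history, so this vector
    tends to be smaller in magnitude."""
    hist = item_emb[interacted_ids].mean(axis=0)
    rec = item_emb[recommended_ids].mean(axis=0)
    return hist - rec

# Toy example: 5 items with 4-dimensional embeddings (made-up numbers).
rng = np.random.default_rng(0)
emb = rng.normal(size=(5, 4))
vec = difference_vector(emb, interacted_ids=[0, 1], recommended_ids=[1, 2])
assert vec.shape == (4,)  # one feature vector per user, fed to the attack model
```

In the attack pipeline, one such vector is computed per user and used as the feature input to a binary membership classifier.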