Paper Title
Residue-based Label Protection Mechanisms in Vertical Logistic Regression
Paper Authors
Paper Abstract
Federated learning (FL) enables distributed participants to collaboratively learn a global model without revealing their private data to each other. Recently, vertical FL, where the participants hold the same set of samples but with different features, has received increased attention. This paper first presents a label inference attack method to investigate the potential privacy leakage of the vertical logistic regression model. Specifically, we discover that the attacker can utilize the residue variables, which are calculated by solving the system of linear equations constructed from the local dataset and the received decrypted gradients, to infer the privately owned labels. To deal with this, we then propose three protection mechanisms, namely an additive noise mechanism, a multiplicative noise mechanism, and a hybrid mechanism that leverages local differential privacy and homomorphic encryption techniques, to prevent the attack and improve the robustness of the vertical logistic regression model. Experimental results show that both the additive noise mechanism and the multiplicative noise mechanism can achieve effective label protection with only a slight drop in model testing accuracy; furthermore, the hybrid mechanism can achieve label protection without any testing accuracy degradation, which demonstrates the effectiveness and efficiency of our protection techniques.
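As a rough illustration (not the paper's code), the sketch below shows how a residue-based label inference could work on a toy vertical logistic regression step, and how additive or multiplicative gradient noise might disrupt it. The dimensions, the noise scales, the helper name `infer_labels`, and the exact way noise is applied to the gradient are all illustrative assumptions, not the paper's method.

```python
# Minimal sketch (assumptions, not the paper's implementation) of the
# residue-based label inference attack on vertical logistic regression,
# plus additive / multiplicative noise defenses in the spirit of the abstract.
import numpy as np

rng = np.random.default_rng(0)

# Toy vertical-FL setup: the passive party holds features X (n samples,
# d features) and receives the decrypted gradient g = X^T r, where
# r_i = sigmoid(w . x_i) - y_i is the per-sample residue and y holds
# the active party's private labels.
n, d = 8, 16                           # d >= n makes the linear system solvable
X = rng.normal(size=(n, d))
w = rng.normal(size=d)
y = rng.integers(0, 2, size=n)         # private labels, unknown to the attacker
r = 1.0 / (1.0 + np.exp(-X @ w)) - y   # true residues
g = X.T @ r                            # decrypted gradient seen by the attacker

def infer_labels(X, g):
    """Recover residues by solving X^T r = g, then read labels off the sign.

    Since r_i = p_i - y_i with p_i in (0, 1), r_i < 0 implies y_i = 1
    and r_i > 0 implies y_i = 0.
    """
    r_hat, *_ = np.linalg.lstsq(X.T, g, rcond=None)
    return (r_hat < 0).astype(int)

print("true labels :", y)
print("inferred    :", infer_labels(X, g))   # exact recovery when d >= n

# Additive noise defense (illustrative): perturb the released gradient,
# so the solved residues, and hence the inferred label signs, are distorted.
sigma = 1.0
g_add = g + rng.normal(scale=sigma, size=d)
print("after +noise:", infer_labels(X, g_add))

# Multiplicative noise defense (illustrative): scale each gradient
# coordinate by a random positive factor, breaking the exact linear system.
g_mul = g * rng.uniform(0.5, 1.5, size=d)
print("after *noise:", infer_labels(X, g_mul))
```

With d >= n the unperturbed system X^T r = g is exactly solvable, so the attack recovers every label; the noised variants typically flip some inferred labels, which mirrors the trade-off the abstract reports as a slight drop in model testing accuracy.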