Paper title
Feature Space Hijacking Attacks against Differentially Private Split Learning
Paper authors
Paper abstract
Split learning and differential privacy are technologies with growing potential to help with privacy-compliant advanced analytics on distributed datasets. Attacks against split learning are an important evaluation tool and have recently received increased research attention. This work's contribution is applying a recent feature space hijacking attack (FSHA) to the learning process of a split neural network enhanced with differential privacy (DP), using a client-side off-the-shelf DP optimizer. The FSHA attack reconstructs the client's private data with low error rates at arbitrarily set DP epsilon levels. We also experiment with dimensionality reduction as a potential attack risk mitigation and show that it may help to some extent. We discuss why differential privacy is not an effective protection in this setting and mention other potential risk mitigation methods.
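To make the setting concrete, the following is a minimal sketch of the kind of client-side DP-SGD update the abstract refers to: the client holds the cut layer of a split network and applies per-example gradient clipping plus Gaussian noise before updating its weights. All shapes, the linear cut layer, and the hyperparameters (clip norm, noise multiplier, learning rate) are illustrative assumptions, not the paper's actual configuration or optimizer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative client-side model: a single linear "cut layer" f(x) = x @ W.
# In split learning, the client sends f(x) ("smashed data") to the server
# (not shown here) and receives gradients back for its local update.
W = rng.normal(size=(4, 2))

def dp_sgd_step(W, per_example_grads, clip_norm=1.0, noise_mult=1.1, lr=0.1):
    """One DP-SGD update on the client's cut-layer weights:
    clip each per-example gradient to clip_norm, sum, add Gaussian noise."""
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    noisy_sum = np.sum(clipped, axis=0) + rng.normal(
        scale=noise_mult * clip_norm, size=W.shape)
    return W - lr * noisy_sum / len(per_example_grads)

# Toy per-example gradients, standing in for gradients backpropagated
# from the server's (possibly adversarial, as in FSHA) training signal.
grads = [rng.normal(size=W.shape) for _ in range(8)]
W_new = dp_sgd_step(W, grads)
```

Note that the DP guarantee bounds what the noisy weight updates reveal about any single training example; it does not constrain the server's choice of training objective, which is exactly the gap FSHA exploits.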