Paper Title
Privacy-preserving Transfer Learning via Secure Maximum Mean Discrepancy
Paper Authors
Paper Abstract
The success of machine learning algorithms often relies on a large amount of high-quality data to train well-performing models. However, data is a valuable resource and is, in reality, often held by different parties. An effective solution to this data isolation problem is federated learning, which allows multiple parties to collaboratively train a model. In this paper, we propose a Secure version of the widely used Maximum Mean Discrepancy (SMMD) based on homomorphic encryption to enable effective knowledge transfer in the data federation setting without compromising data privacy. The proposed SMMD avoids the potential information leakage that can arise in transfer learning when aligning the source and target data distributions. As a result, both the source domain and the target domain can fully utilize their data to build more scalable models. Experimental results demonstrate that our proposed SMMD is secure and effective.
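For context, the quantity that SMMD secures is the standard (plaintext) Maximum Mean Discrepancy, a kernel-based distance between two sample distributions. The sketch below is a minimal NumPy illustration of the biased empirical MMD² estimate with an RBF kernel; it is background only and does not reflect the paper's homomorphic-encryption protocol, whose details are not given in this abstract. The function names (`rbf_kernel`, `mmd2`) and the kernel bandwidth `gamma` are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(a, b, gamma=1.0):
    """RBF (Gaussian) kernel matrix between the rows of a and the rows of b."""
    sq_dists = (
        np.sum(a ** 2, axis=1)[:, None]
        + np.sum(b ** 2, axis=1)[None, :]
        - 2.0 * a @ b.T
    )
    return np.exp(-gamma * sq_dists)

def mmd2(source, target, gamma=1.0):
    """Biased empirical estimate of squared MMD between two samples."""
    k_ss = rbf_kernel(source, source, gamma)
    k_tt = rbf_kernel(target, target, gamma)
    k_st = rbf_kernel(source, target, gamma)
    return k_ss.mean() + k_tt.mean() - 2.0 * k_st.mean()

# Toy usage: two Gaussian samples whose means are shifted, so MMD^2 > 0.
rng = np.random.default_rng(0)
source = rng.normal(0.0, 1.0, size=(200, 5))
target = rng.normal(0.5, 1.0, size=(200, 5))
print(mmd2(source, target, gamma=0.5))
```

In a privacy-preserving setting such as the one described here, the kernel evaluations between source and target samples would be computed over encrypted data rather than in the clear, which is the part the proposed SMMD addresses.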