Title
Robust Aggregation for Adaptive Privacy Preserving Federated Learning in Healthcare
Authors
Abstract
Federated learning (FL) enables training models collaboratively across multiple data-owning parties without sharing their data. Given the privacy regulations governing patients' healthcare data, learning-based systems in healthcare can greatly benefit from privacy-preserving FL approaches. However, typical model aggregation methods in FL are sensitive to local model updates, which may lead to failure in learning a robust and accurate global model. In this work, we implement and evaluate different robust aggregation methods in FL applied to healthcare data. Furthermore, we show that such methods can detect and discard faulty or malicious local clients during training. We run two sets of experiments using two real-world healthcare datasets to train medical diagnosis classification tasks. Each dataset is used to simulate the performance of three different robust FL aggregation strategies when facing different poisoning attacks. The results show that privacy-preserving methods can be successfully applied alongside Byzantine-robust aggregation techniques. In particular, we observe that using differential privacy (DP) does not significantly impact the final learning convergence of the different aggregation strategies.
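The abstract contrasts typical (averaging-based) aggregation, which is sensitive to poisoned local updates, with Byzantine-robust alternatives. The paper's three specific strategies are not named here, but coordinate-wise median is one common robust choice; the following NumPy sketch (all client data hypothetical, not from the paper) illustrates why plain averaging fails under a poisoning attack while a median-based rule does not:

```python
import numpy as np

def mean_aggregate(updates):
    """FedAvg-style aggregation: a single extreme update can dominate."""
    return np.mean(updates, axis=0)

def median_aggregate(updates):
    """Coordinate-wise median: one common Byzantine-robust alternative."""
    return np.median(updates, axis=0)

# Hypothetical round: four honest client updates plus one poisoned update.
honest = [np.array([1.0, 1.0]), np.array([1.1, 0.9]),
          np.array([0.9, 1.1]), np.array([1.0, 1.0])]
poisoned = np.array([100.0, -100.0])  # malicious client's update
updates = np.stack(honest + [poisoned])

print(mean_aggregate(updates))    # skewed far from the honest consensus
print(median_aggregate(updates))  # stays close to the honest updates
```

With a minority of malicious clients, the per-coordinate median ignores the extreme values, which is the intuition behind discarding faulty or malicious clients during training.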