Paper Title

FedSiam: Towards Adaptive Federated Semi-Supervised Learning

Authors

Zewei Long, Liwei Che, Yaqing Wang, Muchao Ye, Junyu Luo, Jinze Wu, Houping Xiao, Fenglong Ma

Abstract

Federated learning (FL) has emerged as an effective technique for collaboratively training machine learning models without sharing data or leaking privacy. However, most existing FL methods focus on the supervised setting and ignore the utilization of unlabeled data. Although a few existing studies try to incorporate unlabeled data into FL, they all fail to maintain performance guarantees or generalization ability across various real-world settings. In this paper, we focus on designing a general framework, FedSiam, to tackle different scenarios of federated semi-supervised learning, including four settings in the labels-at-client scenario and two settings in the labels-at-server scenario. FedSiam introduces a siamese network into FL with a momentum update to handle the non-IID challenges brought by unlabeled data. We further propose a new metric to measure the divergence of local model layers within the siamese network. Based on this divergence, FedSiam can automatically select layer-level parameters to upload to the server in an adaptive manner. Experimental results on three datasets under the two scenarios with different data distribution settings demonstrate that the proposed FedSiam framework outperforms state-of-the-art baselines.
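The abstract names two mechanisms: a momentum update between the two branches of the siamese network, and divergence-based, layer-level selection of which parameters each client uploads. The sketch below illustrates both ideas in PyTorch under stated assumptions: the momentum coefficient `m`, the use of cosine distance as the divergence metric, the threshold value, and the keep-low-divergence selection rule are all illustrative placeholders, not the paper's actual formulas.

```python
import copy
import torch
import torch.nn.functional as F

@torch.no_grad()
def momentum_update(online_net, target_net, m=0.99):
    """Exponential-moving-average update of the momentum (target) branch.
    The coefficient m=0.99 is an assumed value, not taken from the paper."""
    for p_online, p_target in zip(online_net.parameters(), target_net.parameters()):
        p_target.mul_(m).add_(p_online, alpha=1.0 - m)

def layer_divergence(online_net, target_net):
    """Per-layer divergence between the two siamese branches. Cosine
    distance is a placeholder; the paper defines its own metric."""
    target_params = dict(target_net.named_parameters())
    divergence = {}
    for name, w in online_net.named_parameters():
        o = w.detach().flatten().float()
        t = target_params[name].detach().flatten().float()
        divergence[name] = 1.0 - F.cosine_similarity(o, t, dim=0).item()
    return divergence

def select_upload_parameters(online_net, target_net, threshold=0.05):
    """Adaptively pick layer-level parameters to send to the server.
    Here layers whose branches stay consistent (low divergence) are kept;
    both the threshold and this rule are assumptions for illustration."""
    divergence = layer_divergence(online_net, target_net)
    state = online_net.state_dict()
    return {name: state[name] for name, d in divergence.items() if d < threshold}

# Toy usage on a client: train the online branch locally, refresh the
# momentum branch, then upload only the selected layers.
online = torch.nn.Linear(10, 2)
target = copy.deepcopy(online)
# ... local training steps on `online` ...
momentum_update(online, target)
upload = select_upload_parameters(online, target)
```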
