Paper Title

FedTP: Federated Learning by Transformer Personalization

Paper Authors

Hongxia Li, Zhongyi Cai, Jingya Wang, Jiangnan Tang, Weiping Ding, Chin-Teng Lin, Ye Shi

Paper Abstract

Federated learning is an emerging learning paradigm where multiple clients collaboratively train a machine learning model in a privacy-preserving manner. Personalized federated learning extends this paradigm to overcome heterogeneity across clients by learning personalized models. Recently, there have been some initial attempts to apply Transformers to federated learning. However, the impacts of federated learning algorithms on self-attention have not yet been studied. This paper investigates this relationship and reveals that federated averaging algorithms actually have a negative impact on self-attention where there is data heterogeneity. These impacts limit the capabilities of the Transformer model in federated learning settings. Based on this, we propose FedTP, a novel Transformer-based federated learning framework that learns personalized self-attention for each client while aggregating the other parameters among the clients. Instead of using a vanilla personalization mechanism that maintains personalized self-attention layers of each client locally, we develop a learn-to-personalize mechanism to further encourage the cooperation among clients and to increase the scalability and generalization of FedTP. Specifically, the learn-to-personalize is realized by learning a hypernetwork on the server that outputs the personalized projection matrices of self-attention layers to generate client-wise queries, keys and values. Furthermore, we present the generalization bound for FedTP with the learn-to-personalize mechanism. Notably, FedTP offers a convenient environment for performing a range of image and language tasks using the same federated network architecture - all of which benefit from Transformer personalization. Extensive experiments verify that FedTP with the learn-to-personalize mechanism yields state-of-the-art performance in non-IID scenarios. Our code is available online.
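To make the learn-to-personalize idea concrete, the sketch below shows how a server-side hypernetwork could map a learnable client embedding to the query/key/value projection matrices of a self-attention layer. This is a minimal illustration under assumed names and dimensions (single attention head, a small MLP hypernetwork, `AttentionHypernetwork`, `personalized_self_attention`); it is not the authors' reference implementation.

```python
# Minimal sketch of server-side learn-to-personalize for self-attention.
# All class/function names, dimensions, and the single-head simplification
# are illustrative assumptions, not the paper's official code.
import torch
import torch.nn as nn


class AttentionHypernetwork(nn.Module):
    """Maps a per-client embedding to Q/K/V projection weights."""

    def __init__(self, num_clients: int, embed_dim: int, hidden_dim: int, d_model: int):
        super().__init__()
        # One learnable embedding per client, kept on the server.
        self.client_embeddings = nn.Embedding(num_clients, embed_dim)
        self.mlp = nn.Sequential(
            nn.Linear(embed_dim, hidden_dim),
            nn.ReLU(),
            # Output flattened W_Q, W_K, W_V, each of shape (d_model, d_model).
            nn.Linear(hidden_dim, 3 * d_model * d_model),
        )
        self.d_model = d_model

    def forward(self, client_id: torch.Tensor):
        flat = self.mlp(self.client_embeddings(client_id))
        w_q, w_k, w_v = flat.chunk(3, dim=-1)
        shape = (self.d_model, self.d_model)
        return w_q.view(shape), w_k.view(shape), w_v.view(shape)


def personalized_self_attention(x: torch.Tensor, w_q, w_k, w_v):
    """Single-head self-attention using client-specific projections."""
    q, k, v = x @ w_q.T, x @ w_k.T, x @ w_v.T
    scores = (q @ k.transpose(-2, -1)) / (q.shape[-1] ** 0.5)
    return torch.softmax(scores, dim=-1) @ v


if __name__ == "__main__":
    hyper = AttentionHypernetwork(num_clients=10, embed_dim=32, hidden_dim=64, d_model=16)
    tokens = torch.randn(4, 8, 16)          # (batch, sequence, d_model)
    w_q, w_k, w_v = hyper(torch.tensor(3))  # projections generated for client 3
    out = personalized_self_attention(tokens, w_q, w_k, w_v)
    print(out.shape)  # torch.Size([4, 8, 16])
```

In this setup only the hypernetwork and client embeddings would live on the server, so gradients with respect to the generated projections update shared parameters while each client still receives client-wise attention weights; the remaining Transformer parameters would be aggregated as usual.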
