Paper Title
FL Games: A federated learning framework for distribution shifts
Paper Authors
Paper Abstract
Federated learning aims to train predictive models for data distributed across clients, under the orchestration of a server. However, participating clients typically each hold data from a different distribution, so predictive models with strong in-distribution generalization can fail catastrophically on unseen domains. In this work, we argue that in order to generalize better across non-i.i.d. clients, it is imperative to learn only correlations that are stable and invariant across domains. We propose FL Games, a game-theoretic framework for federated learning that learns causal features which are invariant across clients. While training to reach the Nash equilibrium, the traditional best-response strategy suffers from high-frequency oscillations. We demonstrate that FL Games effectively resolves this challenge and exhibits smooth performance curves. Further, FL Games scales well in the number of clients, requires significantly fewer communication rounds, and is agnostic to device heterogeneity. Through empirical evaluation, we demonstrate that FL Games achieves high out-of-distribution performance on various benchmarks.
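The oscillation issue the abstract raises can be illustrated with a toy two-player game. This is not the FL Games algorithm itself; the quadratic costs and the damped update rule below are illustrative assumptions only, showing why pure best-response dynamics can cycle around a Nash equilibrium while a smoothed (damped) update spirals into it:

```python
# Toy illustration of best-response dynamics and a Nash equilibrium.
# NOTE: this is NOT the FL Games method; payoffs and smoothing are
# illustrative assumptions. Player 1 minimizes (x - y)^2 (BR: x = y),
# player 2 minimizes (y + x)^2 (BR: y = -x). The Nash equilibrium is (0, 0).

def pure_best_response(x, y, steps):
    """Simultaneous exact best responses: cycles forever (period 4)."""
    traj = [(x, y)]
    for _ in range(steps):
        x, y = y, -x  # each player jumps straight to its best response
        traj.append((x, y))
    return traj

def smoothed_best_response(x, y, steps, eta=0.5):
    """Damped updates: move only a fraction eta toward the best response."""
    traj = [(x, y)]
    for _ in range(steps):
        # simultaneous update using the old (x, y) on the right-hand side
        x, y = (1 - eta) * x + eta * y, (1 - eta) * y - eta * x
        traj.append((x, y))
    return traj

pure = pure_best_response(1.0, 0.0, 8)        # oscillates: (1,0)->(0,-1)->(-1,0)->(0,1)->(1,0)...
smooth = smoothed_best_response(1.0, 0.0, 40)  # contracts toward the equilibrium (0, 0)
```

With exact best responses the joint iterate rotates at constant amplitude and never settles, whereas the damped update has contraction factor below 1 per step and converges to the equilibrium; this mirrors, in miniature, why a smoothed strategy yields the smooth performance curves described above.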