Paper Title
Incentivizing Federated Learning
Paper Authors
Abstract
Federated Learning is an emerging distributed collaborative learning paradigm used by many applications nowadays. The effectiveness of federated learning relies on clients' collective efforts and their willingness to contribute local data. However, due to privacy concerns and the costs of data collection and model training, clients may not always contribute all the data they possess, which negatively affects the performance of the global model. This paper presents an incentive mechanism that encourages clients to contribute as much data as they can obtain. Unlike previous incentive mechanisms, our approach does not monetize data. Instead, we implicitly use model performance as a reward, i.e., significant contributors are rewarded with better models. We theoretically prove that, under certain conditions, clients will participate in federated learning with as much data as they possibly can under our incentive mechanism.