Paper Title
Fed-CBS: A Heterogeneity-Aware Client Sampling Mechanism for Federated Learning via Class-Imbalance Reduction
Paper Authors
Paper Abstract
Due to the limited communication capacities of edge devices, most existing federated learning (FL) methods randomly select only a subset of devices to participate in training for each communication round. Compared with engaging all the available clients, this random-selection mechanism can lead to significant performance degradation on non-IID (non-independent and identically distributed) data. In this paper, we present our key observation that the essential cause of such performance degradation is the class-imbalance of the grouped data from randomly selected clients. Based on this observation, we design an efficient heterogeneity-aware client sampling mechanism, Federated Class-balanced Sampling (Fed-CBS), which can effectively reduce the class-imbalance of the grouped dataset from the intentionally selected clients. In particular, we propose a measure of class-imbalance and then employ homomorphic encryption to derive this measure in a privacy-preserving way. Based on this measure, we also design a computation-efficient client sampling strategy, such that the actively selected clients generate a more class-balanced grouped dataset with theoretical guarantees. Extensive experimental results demonstrate that Fed-CBS outperforms the status quo approaches. Furthermore, it achieves comparable or even better performance than the ideal setting where all the available clients participate in the FL training.
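To make the sampling idea concrete, below is a minimal sketch of heterogeneity-aware client selection. The concrete imbalance measure (squared distance from the uniform class distribution), the greedy selection loop, and all function names here are illustrative assumptions, not the paper's exact definitions, and the privacy-preserving homomorphic-encryption step described in the abstract is omitted.

```python
import numpy as np

# Illustrative sketch only: the measure and the greedy strategy below are
# assumptions for exposition, not Fed-CBS's exact algorithm. Raw label counts
# are used directly here; the paper derives its measure under homomorphic
# encryption so the server never sees per-client class distributions.

def imbalance(label_counts):
    """Squared L2 distance between the normalized label distribution and the
    uniform distribution over classes (0 means perfectly class-balanced)."""
    counts = np.asarray(label_counts, dtype=float)
    p = counts / counts.sum()
    uniform = np.full_like(p, 1.0 / len(p))
    return float(np.sum((p - uniform) ** 2))

def greedy_select(client_label_counts, num_selected):
    """Greedily choose `num_selected` clients so the grouped label counts
    stay as class-balanced as possible after each addition."""
    client_label_counts = np.asarray(client_label_counts, dtype=float)
    group_counts = np.zeros(client_label_counts.shape[1])
    chosen = []
    remaining = list(range(len(client_label_counts)))
    for _ in range(num_selected):
        best = min(remaining,
                   key=lambda c: imbalance(group_counts + client_label_counts[c]))
        chosen.append(best)
        remaining.remove(best)
        group_counts += client_label_counts[best]
    return chosen

# Toy example: 6 clients with skewed label counts over 3 classes;
# select 2 clients whose combined data is most class-balanced.
clients = [[90, 5, 5], [5, 90, 5], [5, 5, 90],
           [60, 30, 10], [10, 30, 60], [33, 33, 34]]
print(greedy_select(clients, 2))
```

In this toy run, the greedy rule favors client combinations whose pooled label histogram is closest to uniform, which is the intuition behind selecting clients "intentionally" rather than uniformly at random.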