Paper Title
Secure Aggregation with Heterogeneous Quantization in Federated Learning
Paper Authors
Abstract
Secure model aggregation across many users is a key component of federated learning systems. The state-of-the-art protocols for secure model aggregation, which are based on additive masking, require all users to quantize their model updates to the same quantization level. This severely degrades their performance due to the lack of adaptation to the available bandwidths of different users. We propose three schemes that allow secure model aggregation while using heterogeneous quantization. This enables the users to adjust their quantization in proportion to their available bandwidth, which can provide a substantially better trade-off between the accuracy of training and the communication time. The proposed schemes are based on a grouping strategy that partitions the network into groups and partitions the local model updates of the users into segments. Instead of applying the aggregation protocol to the entire local model update vector, it is applied to segments with specific coordination among users. We theoretically evaluate the quantization error of our schemes, and also demonstrate how our schemes can be utilized to overcome Byzantine users.
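The two ideas combined in the abstract, additive masking for secure aggregation and heterogeneous quantization via grouping, can be illustrated with a minimal sketch. The code below is not the paper's protocol; it assumes a simplified setting in which users in the same group share one quantization level, pairwise random masks over a finite field cancel when the server sums the masked vectors, and the group sums are dequantized and combined. All names, field size, and level choices are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
P = 2**31 - 1   # size of the masking field (illustrative choice)
D = 8           # model dimension (illustrative)

def quantize(x, K):
    """Map floats in [-1, 1] to integers in {0, ..., K}."""
    return np.round((x + 1) / 2 * K).astype(np.int64)

def dequantize_sum(q_sum, K, n_users):
    """Recover the approximate sum of n_users updates from summed quantized values."""
    return q_sum / K * 2 - n_users

def secure_aggregate(quantized):
    """Additive masking: pairwise random masks cancel in the sum mod P."""
    masked = [q % P for q in quantized]
    n = len(masked)
    for i in range(n):
        for j in range(i + 1, n):
            m = rng.integers(0, P, D)       # shared pairwise mask
            masked[i] = (masked[i] + m) % P
            masked[j] = (masked[j] - m) % P
    total = np.zeros(D, dtype=np.int64)
    for x in masked:                        # the server only sees masked vectors
        total = (total + x) % P
    return total

# Two groups with heterogeneous quantization levels (illustrative values):
# a low-bandwidth group with 4 levels and a high-bandwidth group with 64.
groups = {4:  [rng.uniform(-1, 1, D) for _ in range(3)],
          64: [rng.uniform(-1, 1, D) for _ in range(3)]}

agg = np.zeros(D)
for K, updates in groups.items():
    q_sum = secure_aggregate([quantize(u, K) for u in updates])
    agg += dequantize_sum(q_sum, K, len(updates))

true_sum = sum(sum(us) for us in groups.values())
print(np.max(np.abs(agg - true_sum)))  # small quantization error
```

Within each group the masks cancel exactly, so the server recovers the group's quantized sum without seeing any individual update; the residual gap between `agg` and `true_sum` is the quantization error, which shrinks as the levels increase.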