Paper Title
Rate Distortion Tradeoff in Private Read Update Write in Federated Submodel Learning
Paper Authors
Paper Abstract
We investigate the rate distortion tradeoff in private read update write (PRUW) in relation to federated submodel learning (FSL). In FSL, a machine learning (ML) model is divided into multiple submodels based on the different types of data used for training. Each user downloads and updates only the submodel relevant to its local data. The process of downloading and updating the required submodel while guaranteeing the privacy of both the submodel index and the values of the updates is known as PRUW. In this work, we study how the communication cost of PRUW can be reduced when a predetermined amount of distortion is allowed in the reading (download) and writing (upload) phases. We characterize the rate distortion tradeoff in PRUW and present a scheme that achieves the lowest communication cost under a given distortion budget.
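The abstract's core idea is that tolerating some distortion in the uploaded/downloaded values lowers the communication rate. The toy sketch below is not the paper's PRUW scheme (which additionally provides privacy guarantees); it only illustrates, under the assumption of simple uniform scalar quantization, how spending fewer bits per update entry (lower rate) raises the mean-squared distortion:

```python
import numpy as np

def quantize(updates, num_bits):
    """Uniformly quantize values in [-1, 1] using num_bits per entry."""
    levels = 2 ** num_bits
    # Map [-1, 1] onto integer levels, round, then map back.
    idx = np.round((updates + 1) / 2 * (levels - 1))
    return idx / (levels - 1) * 2 - 1

rng = np.random.default_rng(0)
updates = rng.uniform(-1.0, 1.0, size=10_000)  # toy submodel update vector

for bits in (2, 4, 8):
    q = quantize(updates, bits)
    mse = float(np.mean((q - updates) ** 2))
    print(f"rate = {bits} bits/entry -> distortion (MSE) = {mse:.2e}")
```

Each extra bit per entry roughly quarters the quantization MSE, so a given distortion budget translates directly into a minimum per-entry rate; the paper characterizes the analogous optimal tradeoff when privacy constraints are also imposed.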