Paper Title
Federated Optimization of Smooth Loss Functions
Paper Authors
Paper Abstract
In this work, we study empirical risk minimization (ERM) within a federated learning framework, where a central server minimizes an ERM objective function using training data that is stored across $m$ clients. In this setting, the Federated Averaging (FedAve) algorithm is the staple for determining $ε$-approximate solutions to the ERM problem. Similar to standard optimization algorithms, the convergence analysis of FedAve only relies on smoothness of the loss function in the optimization parameter. However, loss functions are often very smooth in the training data too. To exploit this additional smoothness, we propose the Federated Low Rank Gradient Descent (FedLRGD) algorithm. Since smoothness in data induces an approximate low rank structure on the loss function, our method first performs a few rounds of communication between the server and clients to learn weights that the server can use to approximate clients' gradients. Then, our method solves the ERM problem at the server using inexact gradient descent. To show that FedLRGD can have superior performance to FedAve, we present a notion of federated oracle complexity as a counterpart to canonical oracle complexity. Under some assumptions on the loss function, e.g., strong convexity in parameter, $η$-Hölder smoothness in data, etc., we prove that the federated oracle complexity of FedLRGD scales like $ϕm(p/ε)^{Θ(d/η)}$ and that of FedAve scales like $ϕm(p/ε)^{3/4}$ (neglecting sub-dominant factors), where $ϕ\gg 1$ is a "communication-to-computation ratio," $p$ is the parameter dimension, and $d$ is the data dimension. Then, we show that when $d$ is small and the loss function is sufficiently smooth in the data, FedLRGD beats FedAve in federated oracle complexity. Finally, in the course of analyzing FedLRGD, we also establish a result on low rank approximation of latent variable models.
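The abstract's core idea is that, after a few communication rounds, the server can replace the exact ERM gradient with a weighted combination of gradients evaluated at a small set of points, and then run inexact gradient descent locally. The following is a minimal, schematic sketch of that second stage only; the loss, the anchor points, the placeholder weights, and all function names here are illustrative assumptions, not the paper's actual construction or weight-learning procedure.

```python
import numpy as np

def grad_loss(theta, x):
    """Per-sample gradient for a toy least-squares loss 0.5*(x^T theta - 1)^2.
    (An assumed example loss; the paper treats general smooth losses.)"""
    return x * (x @ theta - 1.0)

def inexact_gradient_descent(theta0, anchors, weights, step, iters):
    """Server-side inexact gradient descent: the true average gradient over all
    clients' data is replaced by a weighted combination of gradients at a few
    anchor data points. The weights are assumed to have been learned during the
    earlier server-client communication rounds described in the abstract."""
    theta = theta0.copy()
    for _ in range(iters):
        approx_grad = sum(w * grad_loss(theta, a) for w, a in zip(weights, anchors))
        theta -= step * approx_grad
    return theta

# Toy usage with made-up dimensions; placeholder uniform weights stand in for
# the learned weights.
rng = np.random.default_rng(0)
p, num_anchors = 5, 3
anchors = [rng.normal(size=p) for _ in range(num_anchors)]
weights = np.ones(num_anchors) / num_anchors
theta_hat = inexact_gradient_descent(np.zeros(p), anchors, weights, step=0.05, iters=200)
```

The sketch only illustrates why an approximate low rank structure is useful: once clients' gradients can be summarized by a few weighted evaluations, the remaining optimization runs entirely at the server without further communication.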