Title
Online Federated Learning via Non-Stationary Detection and Adaptation amidst Concept Drift
Authors
Abstract
Federated Learning (FL) is an emerging domain in the broader context of artificial intelligence research. Methodologies pertaining to FL assume distributed model training, consisting of a collection of clients and a server, with the main goal of achieving an optimal global model under restrictions on data sharing due to privacy concerns. It is worth highlighting that the diverse existing literature on FL mostly assumes stationary data generation processes; such an assumption is unrealistic in real-world conditions, where concept drift occurs due to, for instance, seasonal or periodic observations, or faults in sensor measurements. In this paper, we introduce a multiscale algorithmic framework that combines the theoretical guarantees of the \textit{FedAvg} and \textit{FedOMD} algorithms in near-stationary settings with a non-stationarity detection and adaptation technique to ameliorate FL generalization performance in the presence of concept drift. The framework achieves $\tilde{\mathcal{O}}(\min\{\sqrt{LT},\, \Delta^{\frac{1}{3}}T^{\frac{2}{3}} + \sqrt{T}\})$ \textit{dynamic regret} over $T$ rounds for an underlying general convex loss function, where $L$ is the number of times non-stationary drifts occur and $\Delta$ is the cumulative magnitude of drift experienced within the $T$ rounds.
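To make the detection-and-adaptation idea concrete, the following is a minimal, purely illustrative sketch (not the paper's actual algorithm): a base FL routine such as FedAvg runs round by round, and a crude non-stationarity test on recent observed losses triggers a restart of the base learner when a drift is suspected. The function names, the windowed mean test, and the threshold are all assumptions made for illustration.

```python
import numpy as np

def run_with_drift_detection(train_round, loss_fn, T, window=20, threshold=0.5):
    """Illustrative sketch: run a base FL routine (e.g. one FedAvg round per
    call to train_round) and restart it whenever a simple windowed-mean test
    on the observed losses suggests a non-stationary drift."""
    losses, restarts = [], []
    model = None
    for t in range(T):
        model = train_round(model, t)      # one round of the base FL algorithm
        losses.append(loss_fn(model, t))   # observed loss for this round
        # crude drift test: recent mean loss deviates from the older mean
        if len(losses) >= 2 * window:
            recent = np.mean(losses[-window:])
            past = np.mean(losses[-2 * window:-window])
            if recent - past > threshold:  # loss jumped -> suspected drift
                model, losses = None, []   # restart the base algorithm
                restarts.append(t)
    return model, restarts
```

Run against a synthetic loss stream that jumps at round 50, the test fires shortly after the jump and the base learner is restarted once; a multiscale framework would layer such tests at several window lengths to catch both abrupt and gradual drifts.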