Paper Title
Asymptotic properties of dual averaging algorithm for constrained distributed stochastic optimization
Paper Authors
Paper Abstract
We consider a constrained stochastic optimization problem over a time-varying random network, in which the agents collectively minimize a sum of objective functions subject to a common constraint set, and we investigate asymptotic properties of a distributed algorithm based on dual averaging of gradients. Unlike most existing works on distributed dual averaging algorithms, which mainly concentrate on non-asymptotic properties, we prove not only almost sure convergence and the rate of almost sure convergence, but also asymptotic normality and asymptotic efficiency of the algorithm. First, for a general constrained convex optimization problem distributed over a random network, we prove that consensus is achieved almost surely and that the agents' estimates converge to the same optimal point. For the case of linearly constrained convex optimization, we show that the mirror map of the averaged dual sequence identifies the active constraints of the optimal solution with probability 1, which helps us prove the almost sure convergence rate and then establish asymptotic normality of the algorithm. Furthermore, we verify that the algorithm is asymptotically optimal. To the best of our knowledge, this appears to be the first asymptotic normality result for constrained distributed optimization algorithms. Finally, a numerical example is provided to justify the theoretical analysis.
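The algorithm class described in the abstract (distributed dual averaging: agents mix dual variables over a random network, accumulate noisy gradients, and recover primal estimates through a mirror map) can be illustrated with a minimal sketch. This is not the paper's exact setup: the quadratic objectives, the box constraint, the single-edge gossip network, and the step size 1/√(t+1) are all illustrative assumptions, and the mirror map is taken to be Euclidean projection. The example is chosen so that one coordinate of the optimum lies on the constraint boundary, loosely mirroring the active-constraint identification discussed above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative problem: n agents jointly minimize sum_i 0.5*||x - c_i||^2
# over the box [0,1]^2. The unconstrained minimizer is mean(c_i) = (1.5, 0.4),
# so the box constraint x[0] <= 1 is active and the optimum is (1.0, 0.4).
c = np.array([[1.5, 0.2], [2.0, 0.4], [1.0, 0.6]])
n, d = c.shape
x_star = np.clip(c.mean(axis=0), 0.0, 1.0)  # (1.0, 0.4)

def mirror_map(v):
    """Euclidean projection onto the box (the mirror map in this sketch)."""
    return np.clip(v, 0.0, 1.0)

T = 30000
sigma = 0.05              # std of additive gradient noise
z = np.zeros((n, d))      # dual variables (accumulated gradients)
x = np.zeros((n, d))      # primal estimates

for t in range(T):
    # Time-varying random network: one uniformly chosen gossip edge per round;
    # averaging the pair's dual variables is a doubly stochastic mixing step.
    i, j = rng.choice(n, size=2, replace=False)
    z[[i, j]] = z[[i, j]].mean(axis=0)

    # Each agent adds its noisy local gradient g_i = (x_i - c_i) + noise.
    z += (x - c) + sigma * rng.standard_normal((n, d))

    # Primal update through the mirror map with step size 1/sqrt(t+1).
    alpha = 1.0 / np.sqrt(t + 1)
    x = mirror_map(-alpha * z)
```

After the run, all agents' estimates should be close to each other (consensus) and close to the constrained optimum (1.0, 0.4), with the first coordinate pinned at the boundary of the active constraint.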