Paper Title
Adaptive and Universal Algorithms for Variational Inequalities with Optimal Convergence
Paper Authors
Paper Abstract
We develop new adaptive algorithms for variational inequalities with monotone operators, which capture many problems of interest, notably convex optimization and convex-concave saddle point problems. Our algorithms automatically adapt to unknown problem parameters such as the smoothness and the norm of the operator, and the variance of the stochastic evaluation oracle. We show that our algorithms are universal and simultaneously achieve the optimal convergence rates in the non-smooth, smooth, and stochastic settings. The convergence guarantees of our algorithms improve over existing adaptive methods by an $\Omega(\sqrt{\ln T})$ factor, matching the optimal non-adaptive algorithms. Additionally, prior works require that the optimization domain be bounded. In this work, we remove this restriction and give algorithms for unbounded domains that are adaptive and universal. Our general proof techniques apply to many variants of the algorithm that use one or two operator evaluations per iteration. Classical methods based on the ExtraGradient/MirrorProx algorithm require two operator evaluations per iteration, which is the dominant factor in the running time in many settings.
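To make the two-evaluation structure concrete, the following is a minimal sketch of the classical ExtraGradient update that the abstract contrasts against, paired with a generic AdaGrad-style adaptive step size for illustration. This is not the paper's algorithm: the names `extragradient`, `F`, and `project`, and the step-size rule, are illustrative assumptions.

```python
import numpy as np

def extragradient(F, project, x0, num_iters=1000, D=1.0, eps=1e-12):
    """Classical ExtraGradient for a monotone VI: find x* such that
    <F(x*), x - x*> >= 0 for all x in the domain defined by `project`.

    Each iteration makes two operator evaluations, F(x_t) and
    F(x_{t+1/2}), as the abstract notes. The AdaGrad-style step size
    below is an illustrative adaptive choice, not the paper's rule.
    """
    x = np.asarray(x0, dtype=float)
    avg = np.zeros_like(x)          # ergodic average of the leading points
    g_sq = 0.0                      # running sum of squared operator norms
    for t in range(1, num_iters + 1):
        g = F(x)                                # first operator evaluation
        g_sq += float(np.dot(g, g))
        eta = D / (np.sqrt(g_sq) + eps)         # adaptive step size
        x_half = project(x - eta * g)           # extrapolation (leading) point
        g_half = F(x_half)                      # second operator evaluation
        x = project(x - eta * g_half)           # update step
        avg += (x_half - avg) / t
    return avg

# Example: bilinear convex-concave saddle point min_x max_y x^T A y on the
# unit ball, whose VI operator is F(x, y) = (A y, -A^T x).
A = np.array([[1.0, 2.0], [0.5, 1.0]])

def F(z):
    x, y = z[:2], z[2:]
    return np.concatenate([A @ y, -A.T @ x])

project_ball = lambda z: z / max(1.0, np.linalg.norm(z))
solution = extragradient(F, project_ball, np.ones(4))
```

Single-call variants (e.g., Popov's method or optimistic gradient descent-ascent) avoid the second evaluation by reusing the operator value from the previous iteration's leading point, which is how the per-iteration operator cost mentioned in the abstract can be halved.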