Paper Title
Two steps at a time -- taking GAN training in stride with Tseng's method
Paper Authors
Paper Abstract
Motivated by the training of Generative Adversarial Networks (GANs), we study methods for solving minimax problems with additional nonsmooth regularizers. We do so by employing \emph{monotone operator} theory, in particular the \emph{Forward-Backward-Forward (FBF)} method, which avoids the known issue of limit cycling by correcting each update by a second gradient evaluation. Furthermore, we propose a seemingly new scheme which recycles old gradients to mitigate the additional computational cost. In doing so we rediscover a known method, related to \emph{Optimistic Gradient Descent Ascent (OGDA)}. For both schemes we prove novel convergence rates for convex-concave minimax problems via a unifying approach. The derived error bounds are in terms of the gap function for the ergodic iterates. For the deterministic and the stochastic problem we show a convergence rate of $\mathcal{O}(1/k)$ and $\mathcal{O}(1/\sqrt{k})$, respectively. We complement our theoretical results with empirical improvements in the training of Wasserstein GANs on the CIFAR10 dataset.
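As a concrete illustration of the two update rules the abstract refers to, here is a sketch of one FBF iteration (notation assumed here: $z_k$ stacks the variables of both players, $F$ is the monotone operator built from their gradients, $g$ is the nonsmooth regularizer, and $\lambda$ is a step size below the inverse Lipschitz constant of $F$):
\[
\bar{z}_k = \operatorname{prox}_{\lambda g}\big(z_k - \lambda F(z_k)\big), \qquad
z_{k+1} = \bar{z}_k - \lambda\big(F(\bar{z}_k) - F(z_k)\big).
\]
The gradient-recycling scheme mentioned in the abstract can be sketched by reusing the stored evaluation $F(\bar{z}_{k-1})$ from the previous iteration in place of a fresh $F(z_k)$, so that each iteration requires only one new operator evaluation:
\[
\bar{z}_k = \operatorname{prox}_{\lambda g}\big(z_k - \lambda F(\bar{z}_{k-1})\big), \qquad
z_{k+1} = \bar{z}_k - \lambda\big(F(\bar{z}_k) - F(\bar{z}_{k-1})\big).
\]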