Title
Dynamical mean-field theory for stochastic gradient descent in Gaussian mixture classification
Authors
Abstract
We analyze in closed form the learning dynamics of stochastic gradient descent (SGD) for a single-layer neural network classifying a high-dimensional Gaussian mixture where each cluster is assigned one of two labels. This problem provides a prototype of a non-convex loss landscape with interpolating regimes and a large generalization gap. We define a particular stochastic process for which SGD can be extended to a continuous-time limit that we call stochastic gradient flow. In the full-batch limit, we recover the standard gradient flow. We apply dynamical mean-field theory from statistical physics to track the dynamics of the algorithm in the high-dimensional limit via a self-consistent stochastic process. We explore the performance of the algorithm as a function of the control parameters, shedding light on how it navigates the loss landscape.
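To make the setup concrete, here is a minimal sketch of SGD for a single-layer (linear) classifier on a synthetic two-cluster Gaussian mixture. The dimension, batch fraction `b`, learning rate, logistic loss, and cluster construction are illustrative assumptions, not the paper's exact protocol; the per-step random selection of a fraction `b` of the samples mimics the mini-batch mechanism behind the stochastic gradient flow, and `b -> 1` recovers full-batch gradient descent.

```python
import numpy as np

# Minimal sketch (illustrative assumptions, not the paper's exact protocol):
# SGD for a single-layer classifier on a high-dimensional Gaussian mixture.
rng = np.random.default_rng(0)

d, n = 1000, 2000               # input dimension, number of samples
b, lr, steps = 0.1, 0.05, 500   # batch fraction, learning rate, SGD steps

# Two clusters centered at +/- mu / sqrt(d), each assigned one of two labels.
mu = rng.standard_normal(d)
y = rng.choice([-1.0, 1.0], size=n)
X = y[:, None] * mu / np.sqrt(d) + rng.standard_normal((n, d))

w = rng.standard_normal(d) / np.sqrt(d)  # single-layer weight vector

def grad(w, Xb, yb):
    """Average gradient of the logistic loss log(1 + exp(-y x.w))."""
    margins = yb * (Xb @ w)
    return -(yb / (1.0 + np.exp(margins))) @ Xb / len(yb)

for t in range(steps):
    # Each step draws a random mini-batch: every sample is included
    # independently with probability b. Setting b = 1 recovers
    # full-batch gradient descent (gradient flow in the small-lr limit).
    mask = rng.random(n) < b
    w -= lr * grad(w, X[mask], y[mask])

acc = np.mean(np.sign(X @ w) == y)
print(f"training accuracy after {steps} steps: {acc:.3f}")
```

The Bernoulli sample-selection mask is the discrete-time analogue of the selection process that, in the paper's scaling, yields the continuous-time stochastic gradient flow tracked by the dynamical mean-field equations.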