Paper Title
To Each Optimizer a Norm, To Each Norm its Generalization
Paper Authors
Paper Abstract
We study the implicit regularization of optimization methods for linear models interpolating the training data in the under-parameterized and over-parameterized regimes. Since it is difficult to determine whether an optimizer converges to solutions that minimize a known norm, we flip the problem and investigate which norm is minimized by a given interpolating solution. Using this reasoning, we prove that for over-parameterized linear regression, projections onto linear spans can be used to move between different interpolating solutions. For under-parameterized linear classification, we prove that for any linear classifier separating the data, there exists a family of quadratic norms ||.||_P such that the classifier's direction is the same as that of the maximum P-margin solution. For linear classification, we argue that analyzing convergence to the standard maximum l2-margin is arbitrary, and show that minimizing the norm induced by the data results in better generalization. Furthermore, for over-parameterized linear classification, projections onto the data span enable us to use techniques from the under-parameterized setting. On the empirical side, we propose techniques to bias optimizers towards better-generalizing solutions, improving their test performance. We validate our theoretical results via synthetic experiments, and use the neural tangent kernel to handle non-linear models.
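As a minimal sketch of the over-parameterized regression claim above (assuming a generic full-row-rank data matrix; the setup, variable names, and random data are illustrative and not taken from the paper), the following NumPy snippet shows that projecting any interpolating solution onto the row span of the data recovers the minimum-l2-norm interpolator:

```python
# Minimal sketch (assumed setup, not from the paper): over-parameterized
# linear regression with n < d, where projecting an interpolator onto the
# row span of X recovers the minimum-l2-norm interpolating solution.
import numpy as np

rng = np.random.default_rng(0)
n, d = 20, 100                                 # fewer samples than features
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)

# Minimum-norm interpolator: w_min = X^+ y (lies in the row span of X).
w_min = np.linalg.pinv(X) @ y

# Build another interpolator by adding a null-space component of X.
null_dir = rng.standard_normal(d)
null_dir -= X.T @ np.linalg.lstsq(X.T, null_dir, rcond=None)[0]  # remove row-span part
w_interp = w_min + null_dir
assert np.allclose(X @ w_interp, y)            # still interpolates the data

# Orthogonal projection onto the row span of X maps it back to w_min.
P_rowspan = X.T @ np.linalg.pinv(X @ X.T) @ X
assert np.allclose(P_rowspan @ w_interp, w_min)
```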
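The family of quadratic norms mentioned in the abstract can be written out in the standard max-margin form; the symbols P, w, and the unit-margin scaling below are notational assumptions for illustration, not a quotation from the paper:

```latex
% Quadratic norm induced by a positive-definite matrix P, and the
% corresponding maximum P-margin problem for separable data (x_i, y_i):
\[
  \|w\|_P = \sqrt{w^\top P\, w},
  \qquad
  w_P^{\star} \in \arg\min_{w} \|w\|_P
  \quad \text{s.t.} \quad y_i \,\langle w, x_i \rangle \ge 1 \;\; \forall i .
\]
```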