Paper Title

Analyzing the discrepancy principle for kernelized spectral filter learning algorithms

Authors

Alain Celisse, Martin Wahl

Abstract


We investigate the construction of early stopping rules in the nonparametric regression problem where iterative learning algorithms are used and the optimal iteration number is unknown. More precisely, we study the discrepancy principle, as well as modifications based on smoothed residuals, for kernelized spectral filter learning algorithms including gradient descent. Our main theoretical bounds are oracle inequalities established for the empirical estimation error (fixed design), and for the prediction error (random design). From these finite-sample bounds it follows that the classical discrepancy principle is statistically adaptive for slow rates occurring in the hard learning scenario, while the smoothed discrepancy principles are adaptive over ranges of faster rates (resp. higher smoothness parameters). Our approach relies on deviation inequalities for the stopping rules in the fixed design setting, combined with change-of-norm arguments to deal with the random design setting.
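To make the stopping rule concrete, here is a minimal sketch of the classical discrepancy principle applied to kernel gradient descent on a least-squares objective: iterate until the mean squared residual first falls at or below a threshold proportional to the noise level. All names, the Gaussian kernel, the bandwidth, and the threshold factor `tau` are illustrative assumptions, not the paper's exact construction (the paper also covers smoothed-residual variants and general spectral filters not shown here).

```python
import numpy as np

def gaussian_kernel(X, Y, bw=0.5):
    # Illustrative RBF kernel matrix; bandwidth bw is an arbitrary choice.
    d2 = (X[:, None] - Y[None, :]) ** 2
    return np.exp(-d2 / (2.0 * bw ** 2))

def gd_with_discrepancy(K, y, sigma, tau=1.0, max_iter=5000):
    """Kernel gradient descent on (1/2n)||y - K a||^2, stopped by the
    discrepancy principle: return at the first iteration t where
    ||y - K a_t||^2 / n <= tau * sigma^2 (sigma^2 = noise variance)."""
    n = len(y)
    step = 1.0 / np.linalg.eigvalsh(K / n).max()  # step size below 1/largest eigenvalue
    a = np.zeros(n)
    for t in range(max_iter):
        resid = y - K @ a
        if resid @ resid / n <= tau * sigma ** 2:
            return a, t  # discrepancy principle fires: stop early
        a = a + (step / n) * resid  # gradient step on the coefficients
    return a, max_iter

# Toy regression problem with known noise level.
rng = np.random.default_rng(0)
n, sigma = 100, 0.3
X = np.sort(rng.uniform(0.0, 1.0, n))
y = np.sin(2 * np.pi * X) + sigma * rng.normal(size=n)
K = gaussian_kernel(X, X)
a_hat, t_stop = gd_with_discrepancy(K, y, sigma)
f_hat = K @ a_hat  # fitted values at the design points
```

The key design point is that the rule needs (an estimate of) the noise variance `sigma**2`: early stopping trades the unknown optimal iteration number for a data-driven residual test.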
