Title

SketchySGD: Reliable Stochastic Optimization via Randomized Curvature Estimates

Authors

Zachary Frangella, Pratik Rathore, Shipu Zhao, Madeleine Udell

Abstract

SketchySGD improves upon existing stochastic gradient methods in machine learning by using randomized low-rank approximations to the subsampled Hessian and by introducing an automated stepsize that works well across a wide range of convex machine learning problems. We show theoretically that SketchySGD with a fixed stepsize converges linearly to a small ball around the optimum. Further, in the ill-conditioned setting, we show that SketchySGD converges at a faster rate than SGD for least-squares problems. We validate this improvement empirically with ridge regression experiments on real data. Numerical experiments on both ridge and logistic regression problems with dense and sparse data show that SketchySGD equipped with its default hyperparameters can achieve comparable or better results than popular stochastic gradient methods, even when they have been tuned to yield their best performance. In particular, SketchySGD is able to solve an ill-conditioned logistic regression problem with a data matrix that takes more than $840$ GB of RAM to store, while its competitors, even when tuned, are unable to make any progress. SketchySGD's ability to work out of the box with its default hyperparameters and excel on ill-conditioned problems is an advantage over other stochastic gradient methods, most of which require careful hyperparameter tuning (especially of the learning rate) to obtain good performance and degrade in the presence of ill-conditioning.
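To make the abstract's core idea concrete, here is a minimal NumPy sketch of preconditioned SGD with a randomized Nyström low-rank approximation of a subsampled Hessian, applied to a toy ill-conditioned ridge regression. This is an illustration under stated assumptions, not the authors' reference implementation: the Nyström rank, the regularization heuristic `rho`, the refresh schedule, and the fixed stepsize `eta` are illustrative choices (the paper instead derives an automated stepsize).

```python
import numpy as np

def nystrom_approx(hvp, d, rank, rng):
    """Randomized Nystrom approximation of a PSD matrix accessed only via
    matrix products `hvp`. Returns (U, lam) with H ~= U @ diag(lam) @ U.T."""
    Omega, _ = np.linalg.qr(rng.standard_normal((d, rank)))  # orthonormal test matrix
    Y = hvp(Omega)                                           # sketch: Y = H @ Omega
    nu = np.sqrt(d) * np.finfo(Y.dtype).eps * np.linalg.norm(Y)  # stability shift
    Ynu = Y + nu * Omega
    C = np.linalg.cholesky(Omega.T @ Ynu)                    # (20 x 20) Cholesky factor
    B = np.linalg.solve(C, Ynu.T).T                          # B = Ynu @ inv(C).T
    U, S, _ = np.linalg.svd(B, full_matrices=False)
    return U, np.maximum(S**2 - nu, 0.0)                     # undo the shift

def precond_grad(U, lam, rho, g):
    """Apply (U diag(lam) U^T + rho I)^{-1} to the gradient g."""
    Ug = U.T @ g
    return U @ (Ug / (lam + rho)) + (g - U @ Ug) / rho

# Toy ridge regression: min_w ||A w - b||^2/(2n) + (mu/2)||w||^2,
# with ill-conditioned columns so plain SGD struggles.
rng = np.random.default_rng(0)
n, d, mu, rank = 2000, 100, 1e-4, 20
A = rng.standard_normal((n, d)) * np.logspace(0, -3, d)
w_true = rng.standard_normal(d)
b = A @ w_true + 0.01 * rng.standard_normal(n)

w, batch, eta = np.zeros(d), 64, 0.5  # eta fixed here; the paper automates it
for t in range(500):
    if t % 100 == 0:  # refresh the preconditioner from a subsampled Hessian
        S = rng.choice(n, 256, replace=False)
        hvp = lambda V: A[S].T @ (A[S] @ V) / len(S) + mu * V
        U, lam = nystrom_approx(hvp, d, rank, rng)
        rho = max(lam[-1], mu)  # one simple heuristic, not the paper's rule
    idx = rng.choice(n, batch, replace=False)
    g = A[idx].T @ (A[idx] @ w - b[idx]) / batch + mu * w  # minibatch gradient
    w -= eta * precond_grad(U, lam, rho, g)                # preconditioned step

print("relative error:", np.linalg.norm(w - w_true) / np.linalg.norm(w_true))
```

Note that the preconditioner needs only Hessian-vector products on a small subsample plus a rank-$r$ sketch, so each refresh costs roughly $O(|S|dr)$ and each update adds only $O(dr)$ on top of the minibatch gradient, which is what makes the approach practical at scale.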
