Adagrad Stepsizes: Sharp Convergence Over Nonconvex Landscapes

arXiv: Machine Learning (2019)

Cited by 340 | Viewed 115

Abstract
Adaptive gradient methods such as AdaGrad and its variants update the stepsize in stochastic gradient descent on the fly according to the gradients received along the way; such methods have gained widespread use in large-scale optimization for their ability to converge robustly, without the need to fine-tune the stepsize schedule. Yet, the theoretical guarantees to date for AdaGrad are for online and convex optimization. We bridge this gap by providing theoretical guarantees for the convergence of AdaGrad for smooth, nonconvex functions. We show that the norm version of AdaGrad (AdaGrad-Norm) converges to a stationary point at the O(log(N)/√N) rate in the stochastic setting, and at the optimal O(1/N) rate in the batch (non-stochastic) setting — in this sense, our convergence guarantees are "sharp". In particular, the convergence of AdaGrad-Norm is robust to the choice of all hyperparameters of the algorithm, in contrast to stochastic gradient descent, whose convergence depends crucially on tuning the stepsize to the (generally unknown) Lipschitz smoothness constant and level of stochastic noise on the gradient. Extensive numerical experiments are provided to corroborate our theoretical findings; moreover, the experiments suggest that the robustness of AdaGrad-Norm extends to models in deep learning.
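The AdaGrad-Norm update described in the abstract scales a single global stepsize by the accumulated squared gradient norms. A minimal sketch of this scheme (variable names `eta`, `b0`, and the quadratic test problem are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def adagrad_norm(grad_fn, x0, eta=1.0, b0=1e-2, n_steps=100):
    """Sketch of the AdaGrad-Norm iteration: one scalar accumulator
    of squared gradient norms adaptively shrinks the stepsize."""
    x = np.asarray(x0, dtype=float)
    b_sq = b0 ** 2  # accumulator, initialized from hyperparameter b0
    for _ in range(n_steps):
        g = grad_fn(x)
        b_sq += np.dot(g, g)               # add current squared gradient norm
        x = x - (eta / np.sqrt(b_sq)) * g  # stepsize eta / b_{j+1}
    return x

# Illustrative use on f(x) = ||x||^2, whose gradient is 2x:
x_final = adagrad_norm(lambda x: 2.0 * x, [1.0, 1.0], n_steps=500)
```

Because the accumulator `b_sq` only grows, no Lipschitz constant or noise level needs to be supplied, which is the robustness-to-hyperparameters property the abstract emphasizes.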
Keywords
nonconvex optimization, stochastic offline learning, large-scale optimization, adaptive gradient descent, convergence