Understanding the Generalization Benefits of Late Learning Rate Decay
CoRR (2024)
Abstract
Why does training neural networks with a large learning rate for an extended period
often lead to better generalization? In this paper, we delve into this question
by examining the relation between the training and testing losses of neural networks.
Through visualization of these losses, we observe that the training trajectory
under a large learning rate navigates along the minima manifold of the
training loss, eventually approaching the neighborhood of the testing loss minimum.
Motivated by these findings, we introduce a nonlinear model whose loss
landscape mirrors those observed for real neural networks. Analyzing
SGD training on this model, we show that an extended
phase with a large learning rate steers the model toward the minimum norm
solution of the training loss, which can achieve near-optimal generalization,
thereby affirming the empirically observed benefits of late learning rate
decay.
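
As a minimal sketch of the schedule the abstract describes, the toy below runs minibatch SGD on an overparameterized least-squares problem, whose training loss likewise has a manifold of global minima, with a long large-learning-rate phase followed by a late decay, and compares the result against the minimum-norm interpolator. The setup, learning rates, and step counts are illustrative assumptions, not the paper's nonlinear model or its hyperparameters.

```python
import numpy as np

# Assumed toy setup (not the paper's model): overparameterized least squares,
# n samples and d > n parameters, so the training loss has a manifold of minima.
rng = np.random.default_rng(0)
n, d = 20, 100
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d)

def sgd(w, lr, steps, batch=5):
    """Plain minibatch SGD on the squared loss."""
    for _ in range(steps):
        idx = rng.choice(n, size=batch, replace=False)
        w = w - lr * X[idx].T @ (X[idx] @ w - y[idx]) / batch
    return w

w = np.zeros(d)                   # zero init keeps iterates in the row space of X
w = sgd(w, lr=0.02, steps=4000)   # long phase with the larger learning rate
w = sgd(w, lr=0.002, steps=1000)  # late decay: small rate to settle on the manifold

w_min = np.linalg.pinv(X) @ y     # minimum-norm interpolator, for reference
print("train MSE:", np.mean((X @ w - y) ** 2))
print("distance to min-norm solution:", np.linalg.norm(w - w_min))
```

In this linear toy, the zero initialization already guarantees convergence to the minimum-norm solution; the paper's contribution is showing how, in its nonlinear model, the extended large-learning-rate phase itself provides that steering.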