On the Convergence of Adam and Beyond

International Conference on Learning Representations (ICLR), 2018.


Abstract:

Several recently proposed stochastic optimization methods that have been successfully used in training deep networks, such as RMSProp, Adam, Adadelta, and Nadam, are based on gradient updates scaled by square roots of exponential moving averages of squared past gradients. It has been empirically observed that these algorithms sometimes fail to converge to an optimal solution (or a critical point in nonconvex settings). We show that one cause of such failures is the exponential moving average used in these algorithms. We provide an explicit example of a simple convex optimization setting in which Adam does not converge to the optimal solution, and describe the precise problems with the previous analysis of the Adam algorithm. Our analysis suggests that the convergence issues can be fixed by endowing such algorithms with "long-term memory" of past gradients, and we propose new variants of the Adam algorithm which not only fix the convergence issues but often also lead to improved empirical performance.
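
As a concrete illustration of the update rule described in the abstract, below is a minimal NumPy sketch (not the authors' code) of an Adam-style step that scales the gradient by the square root of an exponential moving average of squared past gradients, with an optional max-tracked second moment standing in for the "long-term memory" fix the paper proposes (AMSGrad). The function name adam_style_step, the toy quadratic objective, and the bias-correction details are illustrative assumptions; exact conventions vary across implementations.

    import numpy as np

    def adam_style_step(theta, grad, m, v, v_hat, t,
                        lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8,
                        amsgrad=False):
        # Exponential moving averages of past gradients and squared past gradients.
        m = beta1 * m + (1 - beta1) * grad
        v = beta2 * v + (1 - beta2) * grad ** 2
        # "Long-term memory": never let the effective second moment shrink,
        # so the effective step size is non-increasing (the AMSGrad idea).
        v_hat = np.maximum(v_hat, v) if amsgrad else v
        # Bias corrections (a common convention; the paper's AMSGrad omits them).
        m_hat = m / (1 - beta1 ** t)
        v_tilde = v_hat / (1 - beta2 ** t)
        # Update scaled by the square root of the (max-tracked) moving average.
        theta = theta - lr * m_hat / (np.sqrt(v_tilde) + eps)
        return theta, m, v, v_hat

    # Toy usage on a separable quadratic f(theta) = ||theta - 1||^2.
    theta = np.zeros(3)
    m = np.zeros(3)
    v = np.zeros(3)
    v_hat = np.zeros(3)
    for t in range(1, 501):
        grad = 2.0 * (theta - 1.0)
        theta, m, v, v_hat = adam_style_step(theta, grad, m, v, v_hat, t,
                                             lr=0.01, amsgrad=True)
    print(theta)  # approaches [1., 1., 1.]

The amsgrad flag toggles only the max-tracking of the second-moment estimate; with it set to False the step reduces to the plain exponentially averaged scaling shared by RMSProp, Adam, and related methods.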
