On the Convergence of Adam and Beyond
International Conference on Learning Representations (ICLR), 2018.
Several recently proposed stochastic optimization methods that have been successfully used in training deep networks, such as RMSProp, Adam, Adadelta, and Nadam, are based on using gradient updates scaled by square roots of exponential moving averages of squared past gradients. It has been empirically observed that sometimes these algorithms fail to converge to an optimal solution (or a critical point in nonconvex settings).
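The scaling scheme described above can be sketched as a minimal Adam-style update; the hyperparameter defaults below are the commonly used ones, not values taken from this paper:

```python
import math

def adam_step(theta, grad, m, v, t,
              lr=0.01, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam-style update: the step is scaled by the square root of an
    exponential moving average (EMA) of squared past gradients."""
    m = beta1 * m + (1 - beta1) * grad          # EMA of gradients
    v = beta2 * v + (1 - beta2) * grad * grad   # EMA of squared gradients
    m_hat = m / (1 - beta1 ** t)                # bias correction
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (math.sqrt(v_hat) + eps)
    return theta, m, v

# Toy run: minimize f(x) = x^2 starting from x = 1.0.
theta, m, v = 1.0, 0.0, 0.0
for t in range(1, 2001):
    grad = 2.0 * theta
    theta, m, v = adam_step(theta, grad, m, v, t)
print(abs(theta))
```

The EMA in `v` is what the paper analyzes: because it can shrink, the effective step size can grow, which is one cause of the non-convergence discussed in the abstract. The paper's proposed fix (AMSGrad) instead keeps a running maximum of the second-moment estimate so the scaling never increases.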