Step-size Optimization for Continual Learning
CoRR (2024)
Abstract
In continual learning, a learner has to keep learning from data over its
lifetime. A key issue is deciding what knowledge to keep and what knowledge to
let go. In a neural network, this can be implemented with a step-size vector
that scales how much each gradient sample changes the network weights. Common
algorithms, like RMSProp and Adam, use heuristics, specifically normalization,
to adapt this step-size vector. In this paper, we show that these heuristics
ignore the effect of their adaptation on the overall objective function, for
example by moving the step-size vector away from better step-size vectors. In
contrast, stochastic meta-gradient descent algorithms, like IDBD (Sutton,
1992), explicitly optimize the step-size vector with respect to the overall
objective function. On simple problems, we show that IDBD consistently
improves step-size vectors, whereas RMSProp and Adam do not. We explain the
differences between the two approaches and their respective limitations. We
conclude by suggesting that combining both approaches could be a promising
future direction for improving the performance of neural networks in continual
learning.
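The meta-gradient idea the abstract contrasts with RMSProp and Adam can be sketched concretely. Below is a minimal, illustrative implementation of IDBD (Sutton, 1992) for linear regression: each weight has its own log step-size `beta`, which is itself adapted by gradient descent on the squared prediction error via a trace `h` of recent weight changes. The function name, variable names, and the meta step-size value `theta=0.01` are choices made here for illustration, not taken from the paper.

```python
import numpy as np

def idbd_update(w, beta, h, x, y, theta=0.01):
    """One IDBD step (Sutton, 1992) for a linear learner.

    w     : weight vector
    beta  : per-weight log step-sizes (alpha_i = exp(beta_i))
    h     : per-weight trace of recent weight changes
    x, y  : input vector and scalar target
    theta : meta step-size (assumed value for this sketch)
    """
    delta = y - w @ x                       # prediction error
    beta = beta + theta * delta * x * h     # meta-gradient step on log step-sizes
    alpha = np.exp(beta)                    # per-weight step-sizes
    w = w + alpha * delta * x               # weight update, scaled per dimension
    # Decayed trace of weight changes; the max(0, .) keeps the decay factor valid.
    h = h * np.maximum(0.0, 1.0 - alpha * x * x) + alpha * delta * x
    return w, beta, h
```

Unlike normalization heuristics, the `beta` update follows (an approximation of) the gradient of the objective with respect to the step-sizes themselves, so step-sizes grow on dimensions where weight changes keep correlating with the error and shrink where they do not.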