NEO: Neuron State Dependent Mechanisms for Efficient Continual Learning

NICE (2023)

Abstract
Continual learning (sequential learning of tasks) is challenging for deep neural networks, mainly because of catastrophic forgetting: the tendency for accuracy on previously trained tasks to drop when new tasks are learned. Although several biologically inspired techniques have been proposed for mitigating catastrophic forgetting, they typically require additional memory and/or computational overhead. Here, we propose a novel regularization approach that combines neuronal activation-based importance measurement with neuron state-dependent learning mechanisms to alleviate catastrophic forgetting in both task-aware and task-agnostic scenarios. We introduce a neuronal state-dependent mechanism driven by neuronal activity traces and selective learning rules, whose storage requirements for regularization parameters grow more slowly with network size than those of schemes that compute weight importance, which grow quadratically. The proposed model, NEO, achieves performance comparable to other state-of-the-art regularization-based approaches to catastrophic forgetting while operating with a reduced memory overhead.
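The abstract gives only the high-level idea, so below is a minimal sketch of what per-neuron importance regularization could look like: an exponential trace of each neuron's activation magnitude (one scalar per neuron, so storage grows with neuron count rather than weight count) scales a penalty on drift of that neuron's incoming weights from values anchored after the previous task. The names (NeuronImportance, trace_decay, reg_strength) are hypothetical illustrations, not the paper's API.

```python
# Hedged sketch of neuron-importance regularization; not the authors' code.
import torch
import torch.nn as nn


class NeuronImportance:
    """Tracks an exponential trace of each neuron's activation magnitude."""

    def __init__(self, num_neurons: int, trace_decay: float = 0.99):
        # One scalar per neuron: O(n) storage, versus O(n^2) for
        # per-weight importance schemes.
        self.trace = torch.zeros(num_neurons)
        self.decay = trace_decay

    def update(self, activations: torch.Tensor) -> None:
        # activations: (batch, num_neurons); average magnitude over the batch.
        batch_mean = activations.abs().mean(dim=0).detach()
        self.trace = self.decay * self.trace + (1.0 - self.decay) * batch_mean


def neuron_regularization(layer: nn.Linear,
                          anchor_weight: torch.Tensor,
                          importance: NeuronImportance,
                          reg_strength: float = 1.0) -> torch.Tensor:
    """Penalize changes to weights feeding important neurons.

    Each output neuron's importance scales the squared drift of its
    entire incoming weight row from the post-previous-task anchor.
    """
    drift = (layer.weight - anchor_weight) ** 2   # (out, in)
    per_neuron = drift.sum(dim=1)                 # (out,)
    return reg_strength * (importance.trace * per_neuron).sum()
```

In use, one would snapshot each layer's weights after finishing a task, keep updating the traces during training, and add this penalty to the new task's loss, so that weights into high-trace (frequently active) neurons are the most protected.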
Keywords
catastrophic forgetting, task agnostic, task incremental learning, domain incremental learning, neuron importance