Hierarchical Prompts for Rehearsal-free Continual Learning
CoRR (2024)
Abstract
Continual learning endeavors to equip the model with the capability to
integrate current task knowledge while mitigating the forgetting of past task
knowledge. Inspired by prompt tuning, prompt-based methods keep the backbone
frozen and train only a small set of lightweight learnable prompts, minimizing
the catastrophic forgetting that arises from updating a large number of
backbone parameters.
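As a rough illustration of this setup (a minimal sketch, not the paper's code; `PromptedBackbone` and its arguments are hypothetical names), the only trainable parameters are the prompt tokens prepended to the input sequence:

```python
import torch
import torch.nn as nn

class PromptedBackbone(nn.Module):
    """Prompt-tuning sketch: a frozen transformer backbone plus a few
    learnable prompt tokens prepended to the token embeddings."""

    def __init__(self, backbone: nn.Module, embed_dim: int = 768, prompt_len: int = 8):
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():
            p.requires_grad = False  # backbone stays frozen
        # The prompt tokens are the only parameters updated during training.
        self.prompt = nn.Parameter(torch.randn(1, prompt_len, embed_dim) * 0.02)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, seq_len, embed_dim) pre-embedded inputs
        prompts = self.prompt.expand(tokens.size(0), -1, -1)
        return self.backbone(torch.cat([prompts, tokens], dim=1))
```

Training then optimizes only `model.prompt` (plus a classification head), which is what keeps the update footprint small relative to fine-tuning the whole backbone.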
Nonetheless, these learnable prompts tend to concentrate on the discriminative
knowledge of the current task while ignoring past task knowledge, so the
learnable prompts themselves still suffer from catastrophic forgetting. This paper
introduces a novel rehearsal-free paradigm for continual learning termed
Hierarchical Prompts (H-Prompts), comprising three categories of prompts:
class prompt, task prompt, and general prompt. To effectively depict the
knowledge of past classes, the class prompt leverages Bayesian Distribution
Alignment to model the distribution of classes in each task. To reduce the
forgetting of past task knowledge, the task prompt employs Cross-task Knowledge
Excavation to amalgamate the knowledge encapsulated in the learned class
prompts of past tasks with current task knowledge. Furthermore, the general
prompt utilizes Generalized Knowledge Exploration to deduce highly generalized
knowledge in a self-supervised manner. Evaluations on two benchmarks
substantiate the efficacy of the proposed H-Prompts, exemplified by an average
accuracy of 87.8%.
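For orientation, the three prompt categories can be organized roughly as follows. This is a hedged sketch of the structure only; `HPromptPool` and the Gaussian sampling stand-in are hypothetical names and do not reproduce the paper's Bayesian Distribution Alignment, Cross-task Knowledge Excavation, or Generalized Knowledge Exploration objectives:

```python
import torch
import torch.nn as nn

class HPromptPool(nn.Module):
    """Sketch of the three prompt categories named in the abstract:
    per-class prompts, per-task prompts, and one shared general prompt."""

    def __init__(self, embed_dim: int = 768, prompt_len: int = 8):
        super().__init__()
        self.embed_dim, self.prompt_len = embed_dim, prompt_len
        self.class_prompts = nn.ParameterDict()  # one prompt per seen class
        self.task_prompts = nn.ParameterDict()   # one prompt per seen task
        # A single prompt shared across all tasks, trained self-supervisedly.
        self.general_prompt = nn.Parameter(
            torch.randn(prompt_len, embed_dim) * 0.02)

    def _new_prompt(self) -> nn.Parameter:
        return nn.Parameter(torch.randn(self.prompt_len, self.embed_dim) * 0.02)

    def add_class(self, class_id: int) -> None:
        self.class_prompts[str(class_id)] = self._new_prompt()

    def add_task(self, task_id: int) -> None:
        self.task_prompts[str(task_id)] = self._new_prompt()

    @torch.no_grad()
    def sample_past_class_prompt(self, class_id: int, std: float = 0.1) -> torch.Tensor:
        # Hypothetical stand-in for distribution-based modeling of past
        # classes: treat the learned class prompt as a Gaussian mean and draw
        # a virtual sample, so past-class knowledge can be revisited without
        # stored exemplars.
        mean = self.class_prompts[str(class_id)]
        return mean + std * torch.randn_like(mean)
```

The design point the abstract implies is that modeling class distributions lets the task prompt revisit past-task knowledge through virtual samples rather than stored real examples, which is what makes the paradigm rehearsal-free.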