Orchestrate Latent Expertise: Advancing Online Continual Learning with Multi-Level Supervision and Reverse Self-Distillation
arXiv (2024)
Abstract
To accommodate real-world dynamics, artificial intelligence systems need to
cope with sequentially arriving content in an online manner. Beyond regular
Continual Learning (CL) attempting to address catastrophic forgetting with
offline training of each task, Online Continual Learning (OCL) is a more
challenging yet realistic setting that performs CL in a one-pass data stream.
Current OCL methods primarily rely on memory replay of old training samples.
However, a notable gap from CL to OCL stems from the additional
overfitting-underfitting dilemma associated with the use of rehearsal buffers:
the inadequate learning of new training samples (underfitting) and the repeated
learning of a few old training samples (overfitting). To this end, we introduce
a novel approach, Multi-level Online Sequential Experts (MOSE), which
cultivates the model as stacked sub-experts, integrating multi-level
supervision and reverse self-distillation. Supervision signals across multiple
stages facilitate appropriate convergence on the new task, while knowledge
distillation gathers the various strengths of these experts to mitigate the
performance decline of old tasks. MOSE demonstrates remarkable efficacy in
learning new samples and preserving past knowledge through multi-level experts,
thereby significantly advancing OCL performance over state-of-the-art baselines
(e.g., up to 7.3%).
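The abstract does not spell out how the two training signals are combined, but a minimal PyTorch sketch of the general idea is given below: each intermediate stage of the backbone gets its own classifier head trained with cross-entropy (multi-level supervision), and the final predictor is additionally pulled toward the intermediate experts' outputs via a KL term (reverse self-distillation). All names and weightings here (MultiLevelExpertNet, num_stages, tau, lambda_rsd) are illustrative assumptions, not the paper's implementation.

# Sketch only: multi-level supervision + reverse self-distillation, under the
# assumptions stated above; not the authors' actual MOSE code.

import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiLevelExpertNet(nn.Module):
    """Backbone split into sequential stages, each followed by an expert head."""

    def __init__(self, in_dim: int, hidden_dim: int, num_classes: int, num_stages: int = 3):
        super().__init__()
        dims = [in_dim] + [hidden_dim] * num_stages
        self.stages = nn.ModuleList(
            nn.Sequential(nn.Linear(dims[i], dims[i + 1]), nn.ReLU())
            for i in range(num_stages)
        )
        # One classifier ("expert") per stage; the deepest one is the final predictor.
        self.heads = nn.ModuleList(nn.Linear(hidden_dim, num_classes) for _ in range(num_stages))

    def forward(self, x):
        logits = []
        for stage, head in zip(self.stages, self.heads):
            x = stage(x)
            logits.append(head(x))
        return logits  # per-stage logits, ordered shallow to deep


def mose_style_loss(logits, targets, tau: float = 2.0, lambda_rsd: float = 1.0):
    """Illustrative combination of the two signals described in the abstract."""
    # Multi-level supervision: every expert is trained on the current labels.
    sup = sum(F.cross_entropy(l, targets) for l in logits)

    # Reverse self-distillation: distill the (detached) intermediate experts
    # into the final predictor, aggregating their strengths into one head.
    final_log_prob = F.log_softmax(logits[-1] / tau, dim=1)
    rsd = sum(
        F.kl_div(final_log_prob, F.softmax(l.detach() / tau, dim=1), reduction="batchmean")
        for l in logits[:-1]
    ) * tau ** 2

    return sup + lambda_rsd * rsd


if __name__ == "__main__":
    model = MultiLevelExpertNet(in_dim=32, hidden_dim=64, num_classes=10)
    x, y = torch.randn(8, 32), torch.randint(0, 10, (8,))
    loss = mose_style_loss(model(x), y)
    loss.backward()
    print(f"total loss: {loss.item():.4f}")

In an OCL setting, such a loss would be applied to each incoming mini-batch (optionally mixed with rehearsal-buffer samples), with only the final expert used at inference time.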