NTK-Guided Few-Shot Class Incremental Learning
arXiv (2024)
Abstract
While anti-amnesia FSCIL learners often excel in incremental sessions, they
tend to prioritize mitigating knowledge attrition over harnessing the model's
potential for knowledge acquisition. In this paper, we delve into the
foundations of model generalization in FSCIL through the lens of the Neural
Tangent Kernel (NTK). Our primary design focus revolves around ensuring optimal
NTK convergence and NTK-related generalization error, serving as the
theoretical bedrock for exceptional generalization. To attain globally optimal
NTK convergence, we employ a meta-learning mechanism grounded in mathematical
principles to guide the optimization process within an expanded network.
Furthermore, to reduce the NTK-related generalization error, we commence from
the foundational level, optimizing the relevant factors constituting its
generalization loss. Specifically, we initiate self-supervised pre-training on
the base session to shape the initial network weights. These weights are then
carefully refined through curricular alignment, followed by dual NTK
regularization tailored specifically for both convolutional and linear layers.
Through the combined effects of these measures, our network acquires robust NTK
properties, significantly enhancing its foundational generalization. On popular
FSCIL benchmark datasets, our NTK-FSCIL surpasses contemporary state-of-the-art
approaches, elevating end-session accuracy by 2.9%.
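The abstract names two NTK-centric ingredients — tracking NTK convergence and regularizing the kernel for convolutional and linear layers — without implementation details. Below is a minimal sketch of computing an empirical NTK in PyTorch, with a hypothetical spectrum-based penalty standing in for the paper's dual NTK regularization; the toy model, shapes, and condition-number penalty are illustrative assumptions, not the authors' code.

```python
import torch
from torch import nn
from torch.func import functional_call, jacrev, vmap

# Toy scalar-output backbone standing in for the paper's expanded network.
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
params = dict(model.named_parameters())

def f(p, x):
    # One example in, scalar out, so the NTK below is a plain n x n Gram matrix.
    return functional_call(model, p, (x.unsqueeze(0),)).squeeze()

def empirical_ntk(x1, x2):
    # Per-example Jacobians w.r.t. every parameter: pytrees of [n, *param_shape].
    j1 = vmap(jacrev(f), (None, 0))(params, x1)
    j2 = vmap(jacrev(f), (None, 0))(params, x2)
    # K[i, j] = <J(x_i), J(x_j)>, summed over all parameters.
    f1 = torch.cat([j.flatten(1) for j in j1.values()], dim=1)
    f2 = torch.cat([j.flatten(1) for j in j2.values()], dim=1)
    return f1 @ f2.T

x = torch.randn(4, 8)
K = empirical_ntk(x, x)  # 4 x 4 empirical NTK on this batch

# Hypothetical spectrum penalty: NTK theory ties generalization to the
# kernel's eigenvalues, so one plausible regularizer discourages a badly
# conditioned kernel. An illustrative stand-in, not the authors' exact loss.
eig = torch.linalg.eigvalsh(K)
ntk_reg = eig.max() / eig.min().clamp_min(1e-8)
```

In the method itself, separate penalties are tailored to convolutional and linear layers; the sketch collapses this to a single kernel-level term for brevity.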