AMU-Tuning: Effective Logit Bias for CLIP-based Few-shot Learning
arXiv (2024)
Abstract
Recently, pre-trained vision-language models (e.g., CLIP) have shown great
potential in few-shot learning and attracted a lot of research interest.
Although efforts have been made to improve the few-shot ability of CLIP, the
key factors behind the effectiveness of existing methods have not been well
studied, limiting further exploration of CLIP's potential in few-shot learning. In this
paper, we first introduce a unified formulation to analyze CLIP-based few-shot
learning methods from a perspective of logit bias, which encourages us to learn
an effective logit bias for further improving performance of CLIP-based
few-shot learning methods. To this end, we disassemble three key components
involved in computation of logit bias (i.e., logit features, logit predictor,
and logit fusion) and empirically analyze the effect on performance of few-shot
classification. Based on analysis of key components, this paper proposes a
novel AMU-Tuning method to learn effective logit bias for CLIP-based few-shot
classification. Specifically, our AMU-Tuning predicts the logit bias by
exploiting the appropriate Auxiliary features, which are fed into an efficient
feature-initialized linear classifier with Multi-branch training. Finally, an
Uncertainty-based fusion is developed to incorporate the logit bias into CLIP
for few-shot classification. The experiments are conducted
on several widely used benchmarks, and the results show that AMU-Tuning
clearly outperforms its counterparts, achieving state-of-the-art performance
in CLIP-based few-shot learning without bells and whistles.
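The logit-bias view described in the abstract can be sketched in a few lines: the final logits are CLIP's zero-shot logits plus a bias predicted by a linear classifier over auxiliary features, combined through an uncertainty-based fusion. The sketch below is illustrative only, assuming NumPy arrays of precomputed features; the function names (`amu_style_logits`, `init_classifier`), the entropy-based fusion weight, and the class-mean initialization are assumptions for exposition, not the paper's exact formulation.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def init_classifier(support_feats, support_labels, num_classes):
    """Feature-initialized linear classifier (assumption: per-class mean
    of the few-shot support set's auxiliary features)."""
    d = support_feats.shape[1]
    weights = np.zeros((d, num_classes))
    for c in range(num_classes):
        weights[:, c] = support_feats[support_labels == c].mean(axis=0)
    return weights

def amu_style_logits(clip_logits, aux_features, aux_weights, lam=1.0):
    """Combine CLIP zero-shot logits with an auxiliary logit bias.

    clip_logits:  (N, C) zero-shot logits from CLIP.
    aux_features: (N, D) features from an auxiliary pre-trained model.
    aux_weights:  (D, C) linear classifier predicting the logit bias.
    lam:          global scale of the bias term.
    """
    bias = aux_features @ aux_weights  # logit bias, shape (N, C)
    # Uncertainty-based fusion (illustrative): weight the bias more when
    # CLIP's zero-shot prediction is less confident (higher entropy).
    probs = softmax(clip_logits)
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=-1, keepdims=True)
    weight = entropy / np.log(clip_logits.shape[-1])  # normalized to [0, 1]
    return clip_logits + lam * weight * bias
```

Here the per-sample entropy weight stands in for the paper's uncertainty-based fusion: samples on which zero-shot CLIP is already confident lean on CLIP, while uncertain samples lean on the auxiliary branch.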