CLIPArTT: Light-weight Adaptation of CLIP to New Domains at Test Time
arXiv (2024)
Abstract
Pre-trained vision-language models (VLMs), exemplified by CLIP, demonstrate
remarkable adaptability across zero-shot classification tasks without
additional training. However, their performance diminishes in the presence of
domain shifts. In this study, we introduce CLIP Adaptation duRing Test-Time
(CLIPArTT), a fully test-time adaptation (TTA) approach for CLIP, which
involves the automatic construction of text prompts during inference for use
as text supervision. Our method employs a unique, minimally invasive text
prompt tuning process, wherein multiple predicted classes are aggregated into
a single new text prompt, used as a pseudo-label to re-classify inputs in a
transductive manner. Additionally, we pioneer the standardization of TTA
manner. Additionally, we pioneer the standardization of TTA benchmarks (e.g.,
TENT) in the realm of VLMs. Our findings demonstrate that, without requiring
additional transformations nor new trainable modules, CLIPArTT enhances
performance dynamically across non-corrupted datasets such as CIFAR-10,
corrupted datasets like CIFAR-10-C and CIFAR-10.1, alongside synthetic datasets
such as VisDA-C. This research underscores the potential for improving VLMs'
adaptability through novel test-time strategies, offering insights for robust
performance across varied datasets and environments. The code can be found at:
https://github.com/dosowiechi/CLIPArTT.git
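The core idea of aggregating each image's top-k predicted classes into a single new text prompt can be sketched as follows. This is a minimal illustration, not the paper's implementation: the prompt template ("a photo of a ... or ...") and the function name `build_pseudo_prompts` are assumptions, and random unit vectors stand in for CLIP's image and text encoders.

```python
import numpy as np

def build_pseudo_prompts(image_feats, text_feats, class_names, k=3):
    """For each image, aggregate its top-k predicted classes into one new
    text prompt, intended to be encoded and used as a pseudo-label target.
    (Hypothetical sketch; the paper's exact prompt format may differ.)"""
    # Cosine similarities between L2-normalised image and text features.
    logits = image_feats @ text_feats.T
    # Indices of the k highest-scoring classes per image.
    topk = np.argsort(-logits, axis=1)[:, :k]
    return ["a photo of a " + " or ".join(class_names[j] for j in row)
            for row in topk]

# Toy example: random unit vectors stand in for CLIP embeddings.
rng = np.random.default_rng(0)
classes = ["cat", "dog", "car", "plane"]
T = rng.normal(size=(4, 8))
T /= np.linalg.norm(T, axis=1, keepdims=True)
I = rng.normal(size=(2, 8))
I /= np.linalg.norm(I, axis=1, keepdims=True)
prompts = build_pseudo_prompts(I, T, classes, k=2)
```

In the full method, each aggregated prompt would be passed back through CLIP's text encoder and used as a soft supervision signal for transductive re-classification of the batch.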