Active Prompt Learning in Vision Language Models
CVPR 2024 (2023)
Abstract
Pre-trained Vision Language Models (VLMs) have demonstrated notable progress
in various zero-shot tasks, such as classification and retrieval. Despite this
performance, adapting VLMs remains essential, because improving performance on
new tasks requires task-specific knowledge. While labels are needed for
adaptation, acquiring them is typically expensive. To overcome this challenge,
active learning, a method of achieving high performance by obtaining labels
for only a small number of samples from experts, has been studied. Active
learning primarily focuses on selecting unlabeled samples for labeling and
leveraging them to train models. In this study, we pose the question, "How can
pre-trained VLMs be adapted under the active learning framework?" In response
to this inquiry, we observe that (1) simply applying a conventional active
learning framework to pre-trained VLMs may even degrade performance compared to
random selection, because of the class imbalance in labeling candidates, and
(2) the knowledge of VLMs can provide hints for achieving balance before
labeling. Based on these observations, we devise a novel active learning
framework for VLMs, denoted as PCB. To assess the effectiveness of our
approach, we conduct experiments on seven different real-world datasets, and
the results demonstrate that PCB surpasses both conventional active learning
and random sampling methods.
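The abstract's second observation (using the VLM's own knowledge to balance classes before labeling) can be illustrated with a minimal sketch. This is a hypothetical illustration, not the paper's exact PCB algorithm: it assumes each unlabeled sample already has a zero-shot pseudo-label and an uncertainty score from the VLM, and selects the labeling batch round-robin across pseudo-classes so that no class dominates the candidates sent to annotators.

```python
# Hypothetical sketch of pseudo-label-balanced active selection.
# Assumes: `pseudo_labels` are zero-shot class predictions from a VLM,
# `uncertainties` are per-sample acquisition scores (higher = more informative).
# Neither the function name nor its interface comes from the paper.
from collections import defaultdict


def balanced_selection(pseudo_labels, uncertainties, budget):
    """Pick `budget` sample indices, cycling over pseudo-classes and
    taking the most uncertain sample within each class first."""
    by_class = defaultdict(list)
    for idx, (cls, unc) in enumerate(zip(pseudo_labels, uncertainties)):
        by_class[cls].append((unc, idx))
    # Within each pseudo-class, prefer the most uncertain samples.
    for pool in by_class.values():
        pool.sort(reverse=True)
    selected = []
    classes = sorted(by_class)
    while len(selected) < budget and any(by_class[c] for c in classes):
        for c in classes:
            if by_class[c] and len(selected) < budget:
                selected.append(by_class[c].pop(0)[1])
    return selected


# Toy example: six unlabeled samples whose pseudo-labels are imbalanced
# (three "cat", two "dog", one "bird").
labels = ["cat", "cat", "cat", "dog", "dog", "bird"]
scores = [0.9, 0.5, 0.8, 0.7, 0.6, 0.4]
print(balanced_selection(labels, scores, 3))  # → [5, 0, 3]
```

A purely uncertainty-based selector would pick indices 0, 2, and 3 here (two "cat", one "dog"); the balanced variant instead covers all three pseudo-classes, which is the kind of pre-labeling balance the abstract argues for.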