Few-Shot Image Classification and Segmentation as Visual Question Answering Using Vision-Language Models
arXiv (2024)
Abstract
The task of few-shot image classification and segmentation (FS-CS) involves
classifying and segmenting target objects in a query image, given only a few
examples of the target classes. We introduce the Vision-Instructed Segmentation
and Evaluation (VISE) method that transforms the FS-CS problem into the Visual
Question Answering (VQA) problem, utilising Vision-Language Models (VLMs), and
addresses it in a training-free manner. By enabling a VLM to interact with
off-the-shelf vision models as tools, the proposed method is capable of
classifying and segmenting target objects using only image-level labels.
Specifically, chain-of-thought prompting and in-context learning guide the VLM
to answer multiple-choice questions like a human; vision models such as YOLO
and Segment Anything Model (SAM) assist the VLM in completing the task. The
modular framework of the proposed method makes it easily extendable. Our
approach achieves state-of-the-art performance on the Pascal-5i and COCO-20i
datasets.
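The abstract describes a pipeline in which a VLM answers a multiple-choice question about the query image (guided by in-context examples and chain-of-thought prompting) and then hands the predicted classes to vision tools such as YOLO and SAM for segmentation. The sketch below is a hypothetical illustration of that flow, not the authors' implementation: the prompt format, helper names, and the stubbed VLM reply are all assumptions, and the actual model and tool calls are omitted.

```python
# Hypothetical sketch of a VISE-style loop: the VLM answers a
# multiple-choice classification question; a detector/segmenter
# (e.g. YOLO + SAM) would then produce masks for the chosen classes.
# The VLM call is stubbed, so only prompt building and parsing run here.

def build_mc_prompt(classes, support_examples):
    """Compose an in-context, chain-of-thought multiple-choice prompt."""
    options = "\n".join(f"({chr(65 + i)}) {c}" for i, c in enumerate(classes))
    shots = "\n".join(f"Example: a support image of a {c} -> answer ({chr(65 + i)})"
                      for i, c in enumerate(support_examples))
    return (f"{shots}\n"
            "Question: which target classes appear in the query image?\n"
            "Think step by step, then answer with the option letter(s).\n"
            f"{options}")

def parse_choice(vlm_reply, classes):
    """Map option letters found in the VLM's reply back to class names."""
    letters = {chr(65 + i): c for i, c in enumerate(classes)}
    return [cls for letter, cls in letters.items() if f"({letter})" in vlm_reply]

classes = ["cat", "dog"]
prompt = build_mc_prompt(classes, classes)
# Stubbed reply standing in for a real VLM call:
reply = "The query image shows a cat, so the answer is (A)."
predicted = parse_choice(reply, classes)
# predicted == ["cat"]; a real pipeline would now prompt a detector
# and a promptable segmenter (such as SAM) with each predicted class.
```

Because classification happens through the multiple-choice answer, only image-level labels are needed; pixel-level supervision is delegated entirely to the off-the-shelf segmentation tools.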