SYNC-CLIP: Synthetic Data Make CLIP Generalize Better in Data-Limited Scenarios
CoRR (2023)
Abstract
Prompt learning is a powerful technique for transferring Vision-Language
Models (VLMs) such as CLIP to downstream tasks. However, prompt-based
methods fine-tuned solely on base classes may struggle to generalize
to novel classes in open-vocabulary scenarios, especially when data are
limited. To address this issue, we propose an innovative approach called
SYNC-CLIP that leverages SYNthetiC data to enhance the generalization
capability of CLIP. Motivated by the observed distribution shift between
real and synthetic samples, we treat the two as distinct domains and
optimize separate domain prompts to capture domain-specific information,
along with shared visual prompts that preserve semantic consistency
between the domains. By aligning cross-domain features, synthetic data
from novel classes provide implicit guidance for rebalancing the decision
boundaries. Experimental results on three model generalization tasks
demonstrate that our method performs competitively across various
benchmarks. Notably, SYNC-CLIP outperforms the state-of-the-art
competitor PromptSRC by an average of 3.0% on novel classes across
11 datasets in open-vocabulary scenarios.
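To make the core idea concrete, below is a minimal PyTorch-style sketch of the mechanism the abstract describes: learnable prompts for the real and synthetic domains, a shared visual prompt, and a cross-domain alignment term. All names (DualDomainPrompts, cross_domain_alignment_loss), dimensions, and the specific alignment objective (cosine distance between mean features) are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch of the SYNC-CLIP idea from the abstract: separate domain
# prompts capture domain-specific information, a shared visual prompt
# preserves semantic consistency, and an alignment loss pulls real and
# synthetic features together. Hypothetical names and shapes throughout.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DualDomainPrompts(nn.Module):
    """Domain-specific prompts plus a shared visual prompt."""

    def __init__(self, n_ctx: int = 4, dim: int = 512):
        super().__init__()
        # Domain-specific context tokens (real vs. synthetic).
        self.real_prompt = nn.Parameter(torch.randn(n_ctx, dim) * 0.02)
        self.syn_prompt = nn.Parameter(torch.randn(n_ctx, dim) * 0.02)
        # Shared visual prompt, used for both domains.
        self.shared_prompt = nn.Parameter(torch.randn(n_ctx, dim) * 0.02)

    def forward(self, domain: str) -> torch.Tensor:
        dom = self.real_prompt if domain == "real" else self.syn_prompt
        # Concatenate shared and domain-specific context tokens.
        return torch.cat([self.shared_prompt, dom], dim=0)


def cross_domain_alignment_loss(feat_real: torch.Tensor,
                                feat_syn: torch.Tensor) -> torch.Tensor:
    """Align mean features across domains; a simple stand-in for the
    paper's cross-domain alignment objective."""
    mu_real = F.normalize(feat_real.mean(dim=0), dim=-1)
    mu_syn = F.normalize(feat_syn.mean(dim=0), dim=-1)
    return 1.0 - torch.dot(mu_real, mu_syn)  # cosine distance


if __name__ == "__main__":
    prompts = DualDomainPrompts()
    # Stand-ins for image features from a frozen CLIP encoder.
    feat_real = torch.randn(8, 512)
    feat_syn = torch.randn(8, 512)
    loss = cross_domain_alignment_loss(feat_real, feat_syn)
    print(prompts("real").shape, float(loss))
```

In this reading, only the prompt parameters are trained while the CLIP backbone stays frozen, so synthetic samples of novel classes can shift the decision boundaries through the alignment term without overfitting the encoder to base classes.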