SynthCLIP: Are We Ready for a Fully Synthetic CLIP Training?
CoRR (2024)
Abstract
We present SynthCLIP, a novel framework for training CLIP models with
entirely synthetic text-image pairs, departing significantly from previous
methods that rely on real data. Leveraging recent text-to-image (TTI)
generative networks and large language models (LLMs), we can generate
synthetic datasets of images and corresponding captions at any scale, with no
human intervention. When trained at scale, SynthCLIP achieves performance comparable
to CLIP models trained on real datasets. We also introduce SynthCI-30M, a
purely synthetic dataset comprising 30 million captioned images. Our code,
trained models, and generated data are released at
https://github.com/hammoudhasan/SynthCLIP
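
The abstract describes a two-stage data-generation pipeline: an LLM produces synthetic captions, and a TTI model renders each caption into an image, yielding text-image pairs for CLIP training. Below is a minimal sketch of that loop, assuming off-the-shelf Hugging Face components; the model names (gpt2, runwayml/stable-diffusion-v1-5), the caption prompt, and the loop structure are illustrative assumptions, not the paper's actual pipeline.

import torch
from transformers import pipeline
from diffusers import StableDiffusionPipeline

device = "cuda" if torch.cuda.is_available() else "cpu"

# Step 1: generate synthetic captions with an LLM.
# gpt2 is a placeholder; SynthCLIP's choice of LLM may differ.
caption_generator = pipeline(
    "text-generation", model="gpt2",
    device=0 if device == "cuda" else -1,
)
prompt = "A photo caption describing an everyday scene:"  # illustrative prompt
outputs = caption_generator(
    prompt, max_new_tokens=30, num_return_sequences=4, do_sample=True
)
captions = [o["generated_text"].removeprefix(prompt).strip() for o in outputs]

# Step 2: render each caption into an image with a TTI model.
dtype = torch.float16 if device == "cuda" else torch.float32
tti = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=dtype
).to(device)

pairs = []
for caption in captions:
    image = tti(caption).images[0]   # one synthetic image per caption
    pairs.append((image, caption))   # a (image, caption) pair for CLIP training

Repeating these two steps with no human in the loop is what lets the dataset grow to arbitrary size, e.g. the 30 million pairs of SynthCI-30M.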