ImageInWords: Unlocking Hyper-Detailed Image Descriptions
CoRR (2024)
Abstract
Despite the longstanding adage "an image is worth a thousand words," creating
accurate and hyper-detailed image descriptions for training Vision-Language
models remains challenging. Current datasets typically have web-scraped
descriptions that are short, low-granularity, and often contain details
unrelated to the visual content. As a result, models trained on such data
generate descriptions replete with missing information, visual inconsistencies,
and hallucinations. To address these issues, we introduce ImageInWords (IIW), a
carefully designed human-in-the-loop annotation framework for curating
hyper-detailed image descriptions and a new dataset resulting from this
process. We validate the framework through evaluations focused on the quality
of the dataset and its utility for fine-tuning with considerations for
readability, comprehensiveness, specificity, hallucinations, and
human-likeness. Our dataset significantly improves across these dimensions
compared to recently released datasets (+66%).
Furthermore, models fine-tuned with IIW data excel by +31%
along the same human evaluation dimensions. Given our fine-tuned models, we
also evaluate text-to-image generation and vision-language reasoning. Our
model's descriptions can generate images closest to the original, as judged by
both automated and human metrics. We also find our model produces more
compositionally rich descriptions, outperforming the best baseline by up to 6%
on the ARO, SVO-Probes, and Winoground datasets.