Skews in the Phenomenon Space Hinder Generalization in Text-to-Image Generation
arXiv (2024)
Abstract
The literature on text-to-image generation is plagued by issues of faithfully
composing entities with relations. However, a formal understanding of how
entity-relation compositions can be effectively learned is lacking. Moreover,
the underlying phenomenon space that meaningfully reflects the problem
structure is not well defined, leading to an arms race for ever-larger
quantities of data in the hope that generalization emerges from large-scale
pretraining. We hypothesize that the coverage of the underlying phenomena has
not been scaled up proportionally, producing a skew in the presented phenomena
that harms generalization. We introduce statistical metrics that quantify both
the linguistic and the visual skew of a dataset for relational learning, and
show that the generalization failures of text-to-image generation are a direct
result of incomplete or unbalanced phenomenological coverage. We first perform
experiments in a synthetic domain and demonstrate that these systematically
controlled metrics are strongly predictive of generalization performance. We
then move to natural images and show that simple distribution perturbations,
guided by our theory, boost generalization without enlarging the absolute data
size. This work points toward improving data diversity and balance as a
direction orthogonal to scaling up absolute size. Our discussion raises
important open questions on 1) the evaluation of generated entity-relation
compositions, and 2) better models for reasoning with abstract relations.
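To make the notion of "skew" concrete: one simple way to quantify imbalance over entity-relation compositions is one minus the normalized Shannon entropy of the triple-frequency distribution. This is an illustrative sketch only; the paper's actual linguistic and visual skew metrics are not specified in the abstract, and the function below is a hypothetical stand-in.

```python
import math
from collections import Counter

def compositional_skew(triples):
    """Hypothetical skew score over (subject, relation, object) triples:
    1 - normalized Shannon entropy of triple frequencies.
    0.0 means perfectly balanced coverage of the observed compositions;
    values approaching 1.0 mean the dataset is dominated by few triples.
    NOTE: this is an illustrative metric, not the one from the paper."""
    counts = Counter(triples)
    if len(counts) <= 1:
        return 1.0  # a single composition offers no relational variety
    total = sum(counts.values())
    probs = [c / total for c in counts.values()]
    entropy = -sum(p * math.log(p) for p in probs)
    return 1.0 - entropy / math.log(len(counts))

# Toy caption datasets over two compositions
balanced = [("cat", "on", "table")] * 5 + [("dog", "under", "chair")] * 5
skewed   = [("cat", "on", "table")] * 9 + [("dog", "under", "chair")] * 1

print(compositional_skew(balanced))  # 0.0: uniform over both triples
print(compositional_skew(skewed))    # well above 0: coverage is unbalanced
```

Under this toy metric, a rebalancing intervention of the kind the abstract describes (perturbing the distribution without adding data) would move the skewed dataset's score toward zero.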