Survey of Bias In Text-to-Image Generation: Definition, Evaluation, and Mitigation
CoRR (2024)
Abstract
The recent advancement of large and powerful models with Text-to-Image (T2I)
generation abilities – such as OpenAI's DALLE-3 and Google's Gemini – enables
users to generate high-quality images from textual prompts. However, it has
become increasingly evident that even simple prompts could cause T2I models to
exhibit conspicuous social bias in generated images. Such bias might lead to
both allocational and representational harms in society, further marginalizing
minority groups. Noting this problem, a large body of recent works has been
dedicated to investigating different dimensions of bias in T2I systems.
However, an extensive review of these studies is lacking, hindering a
systematic understanding of current progress and research gaps. We present the
first extensive survey on bias in T2I generative models. In this survey, we
review prior studies on dimensions of bias: Gender, Skintone, and Geo-Culture.
Specifically, we discuss how these works define, evaluate, and mitigate
different aspects of bias. We find that: (1) while gender and skintone biases
are widely studied, geo-cultural bias remains under-explored; (2) most works on
gender and skintone bias investigate occupational associations, while other
aspects are less frequently studied; (3) almost all gender bias works overlook
non-binary identities in their studies; (4) evaluation datasets and metrics are
scattered, with no unified framework for measuring biases; and (5) current
mitigation methods fail to resolve biases comprehensively. Based on current
limitations, we point out future research directions that contribute to
human-centric definitions, evaluations, and mitigation of biases. We hope to
highlight the importance of studying biases in T2I systems, as well as
encourage future efforts to holistically understand and tackle biases, building
fair and trustworthy T2I technologies for everyone.