SafeGen: Mitigating Unsafe Content Generation in Text-to-Image Models
arXiv (2024)
Abstract
Text-to-image (T2I) models, such as Stable Diffusion, have exhibited
remarkable performance in generating high-quality images from text descriptions
in recent years. However, text-to-image models may be tricked into generating
not-safe-for-work (NSFW) content, particularly in sexual scenarios. Existing
countermeasures mostly focus on filtering inappropriate inputs and outputs, or
suppressing improper text embeddings, which can block explicit NSFW-related
content (e.g., naked or sexy) but may still be vulnerable to adversarial
prompts, i.e., inputs that appear innocent but are ill-intended. In this paper, we
present SafeGen, a framework to mitigate unsafe content generation by
text-to-image models in a text-agnostic manner. The key idea is to eliminate
unsafe visual representations from the model regardless of the text input. In
this way, the text-to-image model is resistant to adversarial prompts since
unsafe visual representations are obstructed from within. Extensive experiments
conducted on four datasets demonstrate SafeGen's effectiveness in mitigating
unsafe content generation while preserving the high fidelity of benign images.
SafeGen outperforms eight state-of-the-art baseline methods and achieves 99.1%
sexual content removal performance. Furthermore, our constructed benchmark of
adversarial prompts provides a basis for future development and evaluation of
anti-NSFW-generation methods.
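The abstract's key idea, suppressing unsafe visual representations inside the model independently of any text input, can be illustrated with a toy sketch. The following is an assumption-laden illustration, not SafeGen's actual procedure: it treats one projection layer of an image model as a linear map `W` and computes a minimum-change edit `W'` that maps an "unsafe" visual feature to a suppressed target while leaving a benign feature unchanged. All names (`v_unsafe`, `t_censored`, etc.) are hypothetical.

```python
import numpy as np

# Toy, hypothetical sketch of text-agnostic weight editing (NOT SafeGen's
# published method): edit a linear layer W so that an "unsafe" feature maps
# to a censored target while a benign feature's output is preserved.

rng = np.random.default_rng(0)
d = 8
W = rng.normal(size=(d, d))          # original layer weights

v_unsafe = rng.normal(size=d)        # feature evoking unsafe content (assumed)
v_benign = rng.normal(size=d)        # feature for benign content (assumed)
t_censored = np.zeros(d)             # desired "suppressed" output

# Constraints: W' @ v_unsafe = t_censored and W' @ v_benign = W @ v_benign.
V = np.stack([v_unsafe, v_benign], axis=1)        # (d, 2) constraint inputs
T = np.stack([t_censored, W @ v_benign], axis=1)  # (d, 2) constraint targets
# Minimum-norm update satisfying W' V = T:
#   W' = W + (T - W V)(V^T V)^{-1} V^T
W_edit = W + (T - W @ V) @ np.linalg.inv(V.T @ V) @ V.T

print(np.linalg.norm(W_edit @ v_unsafe))              # ~0: unsafe output suppressed
print(np.linalg.norm(W_edit @ v_benign - W @ v_benign))  # ~0: benign output kept
```

Because the edit operates on weights rather than on text embeddings or I/O filters, any prompt, adversarial or not, that routes through the edited representation is affected, which mirrors the text-agnostic property the abstract claims.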