Towards Universal Fake Image Detectors that Generalize Across Generative Models
arXiv (2023)
Abstract
With generative models proliferating at a rapid rate, there is a growing need
for general purpose fake image detectors. In this work, we first show that the
existing paradigm, which consists of training a deep network for real-vs-fake
classification, fails to detect fake images from newer breeds of generative
models when trained to detect GAN fake images. Upon analysis, we find that the
resulting classifier is asymmetrically tuned to detect patterns that make an
image fake. The real class becomes a sink class holding anything that is not
fake, including generated images from models not accessible during training.
Building upon this discovery, we propose to perform real-vs-fake classification
without learning; i.e., using a feature space not explicitly trained to
distinguish real from fake images. We use nearest neighbor and linear probing
as instantiations of this idea. When given access to the feature space of a
large pretrained vision-language model, the very simple baseline of nearest
neighbor classification has surprisingly good generalization ability in
detecting fake images from a wide variety of generative models; e.g., it
improves upon the SoTA by +15.07 mAP and +25.90% accuracy when tested on unseen
diffusion and autoregressive models.
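The nearest-neighbor baseline described above can be sketched in a few lines: embed a bank of real images and a bank of (e.g., GAN-generated) fake images with a frozen pretrained encoder, then label a query by whichever bank holds its nearest neighbor in feature space. The sketch below is a minimal illustration, not the paper's implementation; the random arrays stand in for features that would in practice come from a pretrained vision-language encoder such as CLIP's image tower, and the function names are hypothetical.

```python
import numpy as np

def cosine_sim(a, b):
    # Pairwise cosine similarity between rows of a and rows of b.
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a @ b.T

def nn_real_vs_fake(query_feats, real_bank, fake_bank):
    # Label each query by whichever bank contains its single
    # nearest neighbor under cosine similarity (1 = fake, 0 = real).
    sim_real = cosine_sim(query_feats, real_bank).max(axis=1)
    sim_fake = cosine_sim(query_feats, fake_bank).max(axis=1)
    return (sim_fake > sim_real).astype(int)

# Toy stand-in features: two well-separated Gaussian clusters.
# In practice these would be frozen encoder embeddings of real
# images and of fakes from a single accessible generator family.
rng = np.random.default_rng(0)
real_bank = rng.normal(0.0, 1.0, (100, 512)) + 1.0
fake_bank = rng.normal(0.0, 1.0, (100, 512)) - 1.0
queries = np.vstack([rng.normal(0.0, 1.0, (5, 512)) + 1.0,   # real-like
                     rng.normal(0.0, 1.0, (5, 512)) - 1.0])  # fake-like
preds = nn_real_vs_fake(queries, real_bank, fake_bank)
print(preds)
```

Because no classifier is trained, nothing tunes the feature space toward the artifacts of one generator family, which is the property the paper credits for the method's cross-generator generalization.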