Visual Autoregressive Modeling: Scalable Image Generation via Next-Scale Prediction
arXiv (2024)
Abstract
We present Visual AutoRegressive modeling (VAR), a new generation paradigm
that redefines the autoregressive learning on images as coarse-to-fine
"next-scale prediction" or "next-resolution prediction", diverging from the
standard raster-scan "next-token prediction". This simple, intuitive
methodology allows autoregressive (AR) transformers to learn visual
distributions fast and generalize well: VAR, for the first time, makes AR
models surpass diffusion transformers in image generation. On the ImageNet 256×256
benchmark, VAR significantly improves the AR baseline, improving Fréchet inception
distance (FID) from 18.65 to 1.80 and inception score (IS) from 80.4 to 356.4,
with around 20x faster inference speed. It is also empirically verified that
VAR outperforms the Diffusion Transformer (DiT) in multiple dimensions
including image quality, inference speed, data efficiency, and scalability.
Scaling up VAR models exhibits clear power-law scaling laws similar to those
observed in LLMs, with linear correlation coefficients near -0.998 as solid
evidence. VAR further showcases zero-shot generalization ability in downstream
tasks including image in-painting, out-painting, and editing. These results
suggest VAR has initially emulated the two important properties of LLMs:
Scaling Laws and zero-shot task generalization. We have released all models and
code to promote the exploration of AR/VAR models for visual generation and
unified learning.
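The "next-scale prediction" idea above can be sketched as a toy generation loop. This is a minimal illustration, not the paper's implementation: the `next_scale_generation` helper and its random "transformer" stand-in are hypothetical, and the scale schedule is an assumed example. The point is only the control flow, where each autoregressive step emits an entire token map at the next resolution, conditioned on all coarser maps, instead of one token at a time in raster order.

```python
import random

def next_scale_generation(scales=(1, 2, 4, 8), vocab_size=16, seed=0):
    """Toy sketch of coarse-to-fine next-scale prediction (hypothetical
    helper): each step produces a full s x s token map conditioned on the
    coarser maps generated so far."""
    rng = random.Random(seed)
    history = []  # token maps from coarser scales: the AR "prefix"
    for s in scales:
        # Stand-in for the transformer: sample an s x s token map.
        # A real model would condition this prediction on `history`;
        # here it is random, for illustration only.
        token_map = [[rng.randrange(vocab_size) for _ in range(s)]
                     for _ in range(s)]
        history.append(token_map)
    return history  # one map per scale; a VQ decoder would render the finest

maps = next_scale_generation()
print([len(m) for m in maps])  # resolutions of the generated maps
```

Because every step emits a whole map rather than a single token, the number of autoregressive steps grows with the number of scales instead of the number of pixels, which is consistent with the inference speedup the abstract reports.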