U-DiTs: Downsample Tokens in U-Shaped Diffusion Transformers
arXiv (2024)
Abstract
Diffusion Transformers (DiTs) introduce the transformer architecture to
diffusion tasks for latent-space image generation. With an isotropic
architecture that chains a series of transformer blocks, DiTs demonstrate
competitive performance and good scalability; meanwhile, however, the
abandonment of the U-Net architecture by DiTs and their subsequent
improvements is worth rethinking. To this end, we conduct a simple toy
experiment comparing a U-Net-style DiT with an isotropic one. It turns out
that the U-Net architecture gains only a slight advantage from the U-Net
inductive bias, indicating potential
redundancies within the U-Net-style DiT. Inspired by the discovery that U-Net
backbone features are low-frequency-dominated, we perform token downsampling on
the query-key-value tuple in self-attention, which brings further
improvements despite a considerable reduction in computation. Based on
self-attention with downsampled tokens, we propose a series of U-shaped DiTs
(U-DiTs) in this paper and conduct extensive experiments to demonstrate
their extraordinary performance. The proposed U-DiT outperforms DiT-XL/2 at
only 1/6 of its computational cost. Code is available at
https://github.com/YuchuanTian/U-DiT.