FRDiff: Feature Reuse for Universal Training-free Acceleration of Diffusion Models
arXiv (2023)
Abstract
The substantial computational costs of diffusion models, especially due to
the repeated denoising steps necessary for high-quality image generation,
present a major obstacle to their widespread adoption. While several studies
have attempted to address this issue by reducing the number of score function
evaluations (NFE) using advanced ODE solvers without fine-tuning, the decreased
number of denoising iterations misses the opportunity to update fine details,
resulting in noticeable quality degradation. In our work, we introduce an
advanced acceleration technique that leverages the temporal redundancy inherent
in diffusion models. Reusing feature maps with high temporal similarity opens
up a new opportunity to save computation resources without compromising output
quality. To realize the practical benefits of this intuition, we conduct an
extensive analysis and propose a novel method, FRDiff. FRDiff is designed to
harness the advantages of both reduced NFE and feature reuse, achieving a
Pareto frontier that balances fidelity and latency trade-offs in various
generative tasks.
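The core idea, reusing feature maps that change little between adjacent denoising steps, can be illustrated with a minimal sketch. This is not the authors' implementation; the class name, the `refresh_interval` policy, and the stand-in feature function are all hypothetical, chosen only to show how caching an expensive block's output across steps saves computation.

```python
class FeatureReuseBlock:
    """Illustrative cache wrapper: recompute a costly feature map only on
    designated refresh steps and reuse the cached result otherwise."""

    def __init__(self, compute_fn, refresh_interval=2):
        self.compute_fn = compute_fn          # the expensive computation (e.g. a deep UNet block)
        self.refresh_interval = refresh_interval
        self.cached = None
        self.calls = 0                        # counts actual recomputations

    def __call__(self, x, step):
        # Recompute on the first step and every `refresh_interval` steps;
        # otherwise reuse the temporally similar cached feature map.
        if self.cached is None or step % self.refresh_interval == 0:
            self.cached = self.compute_fn(x)
            self.calls += 1
        return self.cached


def expensive_features(x):
    # Stand-in for a heavy network block; here just a trivial transform.
    return [v * 2.0 for v in x]


block = FeatureReuseBlock(expensive_features, refresh_interval=2)
outputs = [block([float(step)], step) for step in range(6)]
# Steps 0, 2, 4 recompute; steps 1, 3, 5 reuse the cache.
print(block.calls)  # → 3
```

In a real diffusion model the reuse schedule would be chosen per layer based on measured temporal similarity, since layers whose activations drift quickly between steps cannot be cached without hurting output quality.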