AddSR: Accelerating Diffusion-based Blind Super-Resolution with Adversarial Diffusion Distillation
arXiv (2024)
Abstract
Blind super-resolution methods based on Stable Diffusion showcase formidable
generative capabilities in reconstructing clear high-resolution images with
intricate details from low-resolution inputs. However, their practical
applicability is often hampered by poor efficiency, stemming from the
requirement of hundreds or thousands of sampling steps. Inspired by the
efficient text-to-image approach adversarial diffusion distillation (ADD), we
design AddSR to address this issue by incorporating the ideas of both
distillation and ControlNet. Specifically, we first propose a prediction-based
self-refinement strategy to provide high-frequency information in the student
model output with marginal additional time cost. We also refine the training
process by employing HR images, rather than LR images, to regulate the
teacher model, providing a more robust constraint for distillation. Second, we
introduce a timestep-adapting loss to address the perception-distortion
imbalance problem introduced by ADD. Extensive experiments demonstrate that our
AddSR generates better restoration results while achieving faster speed than
previous SD-based state-of-the-art models (e.g., 7x faster than SeeSR).
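The abstract mentions regulating the teacher with HR images and distilling it into a fast student. As a rough, hypothetical illustration of the general distillation idea (not the paper's actual networks or losses — every function and constant below is a toy stand-in), the student produces an output in one step and is penalized by its distance to a multi-step teacher output:

```python
import numpy as np

rng = np.random.default_rng(0)

def student_one_step(lr_latent):
    # Toy stand-in for a one-step student: a single fixed mapping.
    return 0.9 * lr_latent

def teacher_multi_step(hr_latent, steps=4):
    # Toy stand-in for a teacher regulated by the HR image:
    # iteratively refines a noisy latent toward the HR target.
    x = hr_latent + rng.normal(scale=0.1, size=hr_latent.shape)
    for _ in range(steps):
        x = x + 0.5 * (hr_latent - x)  # toy denoising step toward HR
    return x

def distillation_loss(student_out, teacher_out):
    # MSE between student and teacher outputs (the distillation term).
    return float(np.mean((student_out - teacher_out) ** 2))

hr = rng.normal(size=(8, 8))
lr = hr + rng.normal(scale=0.2, size=hr.shape)  # degraded input
loss = distillation_loss(student_one_step(lr), teacher_multi_step(hr))
```

In the actual method the student and teacher are diffusion UNets with ControlNet conditioning, and an adversarial term is added alongside the distillation term; this sketch only shows the teacher-supervises-student structure.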