QuEST: Low-bit Diffusion Model Quantization via Efficient Selective Finetuning
CoRR (2024)

Abstract
Diffusion models have achieved remarkable success in image generation tasks,
yet their practical deployment is constrained by high memory and time
consumption. While quantization paves the way for diffusion model compression
and acceleration, existing methods fail entirely when the models are quantized
to low bit-widths. In this paper, we unravel three properties of quantized
diffusion models that compromise the efficacy of current methods: imbalanced
activation distributions, imprecise temporal information, and vulnerability to
perturbations of specific modules. To alleviate the intensified difficulty of
low-bit quantization stemming from the distribution imbalance, we propose
finetuning the quantized model to better adapt to the activation distribution.
Building on this idea, we identify two critical types of quantized layers:
those holding vital temporal information and those sensitive to reduced
bit-width, and finetune them to mitigate performance degradation efficiently.
We empirically verify that our approach modifies the activation distribution
and provides meaningful temporal information, facilitating easier and more
accurate quantization. Our method is evaluated on three high-resolution image
generation tasks and achieves state-of-the-art performance under various
bit-width settings, as well as being the first method to generate readable
images on full 4-bit (i.e., W4A4) Stable Diffusion.
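To make the W4A4 notation concrete (4-bit weights and 4-bit activations), the sketch below shows generic uniform symmetric quantization to a given bit-width. This is a standard textbook scheme for illustration only; it is an assumption, not QuEST's actual quantizer, and the function name and per-tensor scale choice are hypothetical.

```python
import numpy as np

def quantize_uniform(x, n_bits=4):
    """Uniform symmetric fake-quantization to n_bits.

    A minimal illustrative sketch, not the paper's method: values are
    mapped to a signed integer grid and dequantized back, so the output
    shows the rounding error a low-bit representation would introduce.
    """
    qmax = 2 ** (n_bits - 1) - 1            # e.g. 7 for signed 4-bit
    scale = np.abs(x).max() / qmax          # per-tensor scale (assumed)
    q = np.clip(np.round(x / scale), -qmax - 1, qmax)
    return q * scale                        # dequantized values

# At 4 bits only 16 distinct levels exist, so small values are
# coarsely represented -- the core difficulty the abstract refers to.
x = np.array([0.9, -0.32, 0.05, 0.7])
xq = quantize_uniform(x, n_bits=4)
```

With an imbalanced distribution (a few large outliers, many small values), the per-tensor scale is dominated by the outliers and most values land on only a handful of grid points, which is why low-bit activation quantization degrades so sharply.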