APTQ: Attention-aware Post-Training Mixed-Precision Quantization for Large Language Models
CoRR (2024)
Abstract
Large Language Models (LLMs) have greatly advanced the natural language
processing paradigm. However, the high computational load and huge model sizes
pose a grand challenge for deployment on edge devices. To this end, we propose
APTQ (Attention-aware Post-Training Mixed-Precision Quantization) for LLMs,
which considers not only the second-order information of each layer's weights,
but also, for the first time, the nonlinear effect of attention outputs on the
entire model. We leverage the Hessian trace as a sensitivity metric for
mixed-precision quantization, ensuring an informed precision reduction that
retains model performance. Experiments show that APTQ surpasses previous
quantization methods, achieving an average bitwidth of 4 with a perplexity of
5.22 on the C4 dataset, nearly equivalent to full precision. In addition, APTQ
attains state-of-the-art zero-shot accuracies of 68.24% and 70.48% at an
average bitwidth of 3.8 on LLaMa-7B and LLaMa-13B, respectively, demonstrating
its effectiveness in producing high-quality quantized LLMs.
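To make the Hessian-based sensitivity idea concrete, below is a minimal, illustrative Python sketch (not the authors' implementation): it estimates the Hessian trace of a loss with respect to each weight matrix via Hutchinson's estimator and then ranks layers for mixed-precision bit allocation. The toy model, the estimator, and the bit-assignment rule are assumptions made purely for illustration.

```python
# Minimal sketch: Hessian-trace sensitivity scoring for mixed-precision
# quantization. Hypothetical model and allocation rule; for illustration only.
import torch
import torch.nn as nn


def hessian_trace(loss: torch.Tensor, param: torch.Tensor, n_samples: int = 16) -> float:
    """Estimate tr(H) of `loss` w.r.t. `param` with Hutchinson's estimator,
    E_v[v^T H v], where v is drawn from a Rademacher distribution."""
    grad = torch.autograd.grad(loss, param, create_graph=True)[0]
    est = 0.0
    for _ in range(n_samples):
        v = torch.randint_like(param, high=2) * 2.0 - 1.0   # Rademacher +-1
        hv = torch.autograd.grad((grad * v).sum(), param, retain_graph=True)[0]
        est += (hv * v).sum().item()
    return est / n_samples


def assign_bits(sensitivity: dict, high_bits: int = 4, low_bits: int = 3,
                high_fraction: float = 0.5) -> dict:
    """Hypothetical rule: the most Hessian-sensitive layers keep more bits."""
    ranked = sorted(sensitivity, key=sensitivity.get, reverse=True)
    cutoff = max(1, int(len(ranked) * high_fraction))
    return {name: (high_bits if i < cutoff else low_bits)
            for i, name in enumerate(ranked)}


if __name__ == "__main__":
    torch.manual_seed(0)
    # Toy two-layer network standing in for transformer sub-layers.
    model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 8))
    x, y = torch.randn(32, 8), torch.randn(32, 8)
    loss = nn.functional.mse_loss(model(x), y)

    sens = {name: abs(hessian_trace(loss, p))
            for name, p in model.named_parameters() if p.dim() == 2}
    print("per-layer |tr(H)| sensitivity:", sens)
    print("bit assignment:", assign_bits(sens))
```

In this sketch, a larger absolute Hessian trace marks a layer whose loss is more curvature-sensitive to weight perturbations, so it is assigned the higher bitwidth; how APTQ actually combines this score with the attention-output effect is described in the paper itself.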