A Dataflow Compiler for Efficient LLM Inference using Custom Microscaling Formats
arXiv (2023)
Abstract
Model quantization represents both parameters (weights) and intermediate
values (activations) in a more compact format, thereby directly reducing both
computational and memory costs in hardware. Quantizing recent large
language models (LLMs) is challenging: values in LLMs require larger dynamic
ranges, which makes it hard to reach a memory density competitive with other
models such as convolutional neural networks.
Current hardware can expedite computation for LLMs using compact numerical
formats such as low-bitwidth integers or floating-point numbers. Each has
advantages: integer operations simplify circuit design, whereas floating-point
calculations can enhance accuracy when a wider dynamic range is required. In
this work, we seek an efficient data format that combines the best of both
worlds: Microscaling (MX) formats, which achieve both a large dynamic range
and high memory density.
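The key idea behind MX formats can be illustrated with a short sketch: a block of values shares a single power-of-two scale (capturing the dynamic range), while each element is stored as a low-bit integer (giving memory density). The function names and parameters below are illustrative, not from the MASE paper, and this sketch omits details of the actual MX element encodings.

```python
import numpy as np

def mx_quantize(x, block_size=32, elem_bits=4):
    """Sketch of MX-style block quantization: each block of `block_size`
    values shares one power-of-two scale; elements become signed
    `elem_bits`-bit integers. Illustrative only."""
    x = np.asarray(x, dtype=np.float64)
    pad = (-len(x)) % block_size              # pad so length divides evenly
    blocks = np.pad(x, (0, pad)).reshape(-1, block_size)
    qmax = 2 ** (elem_bits - 1) - 1           # e.g. 7 for 4-bit signed ints
    # Shared per-block scale: smallest power of two covering the block's
    # maximum magnitude at the chosen element bitwidth.
    max_abs = np.abs(blocks).max(axis=1, keepdims=True)
    max_abs[max_abs == 0] = 1.0               # avoid log2(0) for all-zero blocks
    scale = 2.0 ** np.ceil(np.log2(max_abs / qmax))
    q = np.clip(np.round(blocks / scale), -qmax - 1, qmax)
    return q.astype(np.int8), scale

def mx_dequantize(q, scale, n):
    """Reconstruct the first n values from quantized blocks and scales."""
    return (q * scale).reshape(-1)[:n]
```

Because the scale is a power of two, rescaling in hardware reduces to an exponent adjustment rather than a full multiply, which is part of what makes MX formats cheap to implement.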
In this paper, we propose a compiler named MASE for exploring mixed-precision
MX formats on dataflow hardware accelerators for LLM inference. Our main
contributions are twofold. First, we propose a novel orchestration abstraction
to explore both software and hardware optimizations with new data formats.
Second, MASE achieves LLM inference at an average precision of 4 bits, with
minimal to no accuracy degradation. To our knowledge, MASE represents the first
effort to harness fine-grain multi-precision MX formats in the design of LLM
hardware accelerators. Over a range of LLMs and datasets, MASE achieves an
average improvement of 24% in energy efficiency compared to designs using
8-bit fixed-point numbers.