CacheGen: Fast Context Loading for Language Model Applications via KV Cache Streaming
arXiv (2023)
Abstract
As large language models (LLMs) take on complex tasks, their inputs are
supplemented with longer contexts that incorporate domain knowledge or
user-specific information. Yet using long contexts poses a challenge for
responsive LLM systems, as nothing can be generated until the whole context is
processed by the LLM. While the context-processing delay can be reduced by
reusing the KV cache of a context across different inputs, fetching the KV
cache, which contains large tensors, over the network can cause extra network
delays.
CacheGen is a fast context-loading module for LLM systems. First, CacheGen
uses a custom tensor encoder, which leverages the KV cache's distributional
properties to encode a KV cache into more compact bitstream representations
with negligible encoding/decoding overhead. This reduces the bandwidth demand
for fetching the KV cache. Second, to maintain low context-loading delay and high
generation quality, CacheGen adapts its streaming strategy to cope with
changes in available bandwidth. When available bandwidth drops, CacheGen may
raise the compression level for a part of the context or choose to recompute
its KV cache on the fly. We test CacheGen on four popular LLMs of various sizes
and four datasets (662 contexts in total). Compared to recent systems that
reuse the KV cache, CacheGen reduces the KV cache size by 3.7-4.3x and the
total delay in fetching and processing contexts by 2.7-3.2x, with
negligible impact on LLM response quality measured by accuracy or perplexity.
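To make the bandwidth-adaptive streaming idea concrete, below is a minimal sketch of the kind of per-chunk decision the abstract describes: pick a compression level whose transfer time fits a delay budget, or fall back to recomputing the KV cache from the raw text. The function, parameter names, and numbers are illustrative assumptions, not CacheGen's actual code or configuration.

```python
from dataclasses import dataclass


@dataclass
class ChunkPlan:
    chunk_id: int
    action: str            # "stream" or "recompute"
    level: int | None      # chosen compression level if streaming, else None


def plan_chunk(chunk_id: int,
               encoded_sizes_bytes: dict[int, int],  # hypothetical: encoded size at each compression level
               bandwidth_bps: float,                 # current estimated network bandwidth
               recompute_delay_s: float,             # time to recompute this chunk's KV cache on the GPU
               delay_budget_s: float) -> ChunkPlan:
    """Choose the lowest compression level (best quality) whose fetch delay fits the budget;
    if no level fits, recompute the KV cache when that is faster than the smallest stream."""
    for level in sorted(encoded_sizes_bytes):         # lower level = less compression = higher quality
        fetch_delay_s = encoded_sizes_bytes[level] * 8 / bandwidth_bps
        if fetch_delay_s <= delay_budget_s:
            return ChunkPlan(chunk_id, "stream", level)
    # Every level misses the budget: compare the most compressed stream against recomputation.
    highest_level = max(encoded_sizes_bytes)
    best_fetch_s = encoded_sizes_bytes[highest_level] * 8 / bandwidth_bps
    if recompute_delay_s < best_fetch_s:
        return ChunkPlan(chunk_id, "recompute", None)
    return ChunkPlan(chunk_id, "stream", highest_level)


# Example: a 2 MB chunk with three hypothetical compression levels over a 20 Mbps link.
plan = plan_chunk(
    chunk_id=0,
    encoded_sizes_bytes={0: 2_000_000, 1: 1_000_000, 2: 500_000},
    bandwidth_bps=20e6,
    recompute_delay_s=0.15,
    delay_budget_s=0.3,
)
print(plan)  # ChunkPlan(chunk_id=0, action='stream', level=2)
```

In this toy example only the most compressed level meets the 0.3 s budget, so the chunk is streamed at level 2; if the link degraded further, recomputation would win. The actual system's encoder, compression levels, and control loop are described in the paper itself.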