
IceFormer: Accelerated Inference with Long-Sequence Transformers on CPUs

ICLR 2024

Abstract
One limitation of existing transformer-based models is that they cannot handle very long input sequences, since their self-attention operations exhibit quadratic time and space complexity. This problem becomes especially acute when transformers are deployed on hardware platforms equipped only with CPUs. To address this issue, we propose a novel method for accelerating self-attention at inference time that works with pretrained transformer models out of the box, without requiring retraining. We use our method to accelerate a variety of long-sequence transformers on several benchmarks and demonstrate greater speedups than the baselines.
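To make the quadratic bottleneck concrete: exact self-attention scores every query against every key, so one common family of inference-time approximations keeps only the k highest-scoring keys per query. The sketch below is a minimal illustration of that general top-k idea in NumPy, not the paper's actual algorithm; the function name topk_attention and all parameters are hypothetical, and a real accelerator would use an approximate nearest-neighbor index instead of forming the full n x n score matrix as done here for clarity.

```python
import numpy as np

def topk_attention(Q, K, V, k):
    """Approximate attention: each query attends only to its k
    highest-scoring keys (illustrative sketch, not IceFormer itself).

    Q, K, V: (n, d) arrays; k: keys kept per query.
    Returns an (n, d) approximation of softmax(QK^T / sqrt(d)) V.
    """
    n, d = Q.shape
    # Exhaustive scores for clarity; an ANN index would avoid
    # materializing this O(n^2) matrix.
    scores = Q @ K.T / np.sqrt(d)
    # Indices of the k largest scores for each query.
    idx = np.argpartition(scores, -k, axis=1)[:, -k:]
    out = np.empty_like(Q, dtype=np.float64)
    for i in range(n):
        s = scores[i, idx[i]]
        w = np.exp(s - s.max())
        w /= w.sum()                 # softmax over the k kept keys only
        out[i] = w @ V[idx[i]]
    return out

# Usage: compare against exact attention on random data.
rng = np.random.default_rng(0)
n, d = 512, 64
Q, K, V = rng.standard_normal((3, n, d))
approx = topk_attention(Q, K, V, k=32)

# Exact attention for reference.
S = Q @ K.T / np.sqrt(d)
W = np.exp(S - S.max(axis=1, keepdims=True))
exact = (W / W.sum(axis=1, keepdims=True)) @ V
print("max abs error vs exact attention:", np.abs(approx - exact).max())
```

Because attention weights are typically dominated by a few large scores, restricting the softmax to the top-k keys often changes the output little while cutting per-query work from O(n) to roughly O(k) once a fast key-lookup structure replaces the exhaustive scan.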
Keywords
Efficient Transformers, Inference-time Efficiency, CPU