Not All Attention is Needed: Parameter and Computation Efficient Transfer Learning for Multi-modal Large Language Models
CoRR (2024)
Abstract
In this paper, we propose a novel parameter- and computation-efficient tuning method for Multi-modal Large Language Models (MLLMs), termed Efficient Attention Skipping (EAS). Concretely, we first reveal that multi-head attentions (MHAs), the main computational overhead of MLLMs, are often redundant for downstream tasks. Based on this observation, EAS evaluates the attention redundancy and skips the less important MHAs to speed up inference. In addition, we propose a novel propagation-of-information adapter (PIA) to serve the attention skipping of EAS while maintaining parameter efficiency; it can be further re-parameterized into the feed-forward networks (FFNs) for zero extra latency. To validate EAS, we apply it to a recently proposed MLLM called LaVIN and a classic VL pre-trained model called METER, and conduct extensive experiments on a set of benchmarks. The experiments show that EAS not only retains high performance and parameter efficiency but also greatly speeds up inference. For instance, LaVIN-EAS achieves 89.98% accuracy on ScienceQA while speeding up inference by 2.2 times compared to LaVIN.
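To make the skipping mechanism concrete, below is a minimal PyTorch sketch of a transformer block whose MHA sub-layer can be bypassed when it is judged redundant, so that only the FFN path is computed. This is an illustrative assumption rather than the paper's implementation: the `SkippableBlock` module, the per-block `scores`, and the `threshold` are hypothetical placeholders for EAS's actual redundancy evaluation, and the PIA re-parameterization into the FFN is not shown.

```python
# Minimal sketch (assumption, not the paper's code): a pre-norm transformer
# block in which the multi-head attention (MHA) sub-layer can be skipped
# entirely when it is deemed redundant, leaving only the FFN sub-layer.
import torch
import torch.nn as nn

class SkippableBlock(nn.Module):
    def __init__(self, dim: int, num_heads: int, skip_attention: bool = False):
        super().__init__()
        self.skip_attention = skip_attention  # True for blocks with redundant MHA
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.ffn = nn.Sequential(
            nn.Linear(dim, 4 * dim),
            nn.GELU(),
            nn.Linear(4 * dim, dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if not self.skip_attention:
            # Standard MHA sub-layer with residual connection.
            h = self.norm1(x)
            attn_out, _ = self.attn(h, h, h, need_weights=False)
            x = x + attn_out
        # When skip_attention is True, the MHA sub-layer is bypassed,
        # saving its computation; only the FFN sub-layer runs.
        return x + self.ffn(self.norm2(x))

# Hypothetical usage: skip attention in blocks whose importance score
# (however EAS actually measures it) falls below a chosen threshold.
scores = [0.9, 0.1, 0.7, 0.05]   # illustrative per-block scores, not real data
threshold = 0.5
blocks = nn.ModuleList(
    [SkippableBlock(dim=256, num_heads=8, skip_attention=(s < threshold))
     for s in scores]
)
x = torch.randn(2, 16, 256)      # (batch, tokens, dim)
for blk in blocks:
    x = blk(x)
```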