NutePrune: Efficient Progressive Pruning with Numerous Teachers for Large Language Models
CoRR (2024)
Abstract
The considerable size of Large Language Models (LLMs) presents notable
deployment challenges, particularly on resource-constrained hardware.
Structured pruning offers an effective means to compress LLMs, thereby
reducing storage costs and enhancing inference speed for more efficient
utilization. In this work, we study data-efficient and resource-efficient
structured pruning methods to obtain smaller yet still powerful models.
Knowledge Distillation is well-suited for pruning, as the intact model can
serve as an excellent teacher for pruned students. However, it becomes
challenging in the context of LLMs due to memory constraints. To address this,
we propose an efficient progressive Numerous-teacher pruning method
(NutePrune). NutePrune mitigates excessive memory costs by loading only one
intact model and integrating it with various masks and LoRA modules, enabling
it to seamlessly switch between teacher and student roles. This approach allows
us to leverage numerous teachers with varying capacities to progressively guide
the pruned model, enhancing overall performance. Extensive experiments across
various tasks demonstrate the effectiveness of NutePrune. In LLaMA-7B zero-shot
experiments, NutePrune retains 97.17% of the performance of the original model
at 20% sparsity.
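The core mechanism described above, keeping a single intact model in memory and switching between teacher and student roles by toggling pruning masks and LoRA modules on and off, can be sketched as follows. This is a minimal illustrative sketch, not NutePrune's actual implementation; the class and attribute names (`MaskedLoRALinear`, `student_mode`, `mask_logits`) are assumptions for illustration.

```python
import torch
import torch.nn as nn


class MaskedLoRALinear(nn.Module):
    """A linear layer that can act as teacher (intact) or student (pruned).

    Teacher mode: the mask and LoRA adapter are bypassed, recovering the
    original dense weights. Student mode: a soft structured mask scales
    output channels toward zero and a low-rank LoRA correction is added.
    Only one copy of the base weights is ever held in memory.
    """

    def __init__(self, in_features: int, out_features: int, rank: int = 4):
        super().__init__()
        self.base = nn.Linear(in_features, out_features)
        # Learnable logits for a soft mask over output channels
        # (structured pruning of whole channels, not individual weights).
        self.mask_logits = nn.Parameter(torch.zeros(out_features))
        # LoRA factors; lora_b starts at zero so the adapter is
        # initially a no-op, as in standard LoRA initialization.
        self.lora_a = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(out_features, rank))
        self.student_mode = True

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if not self.student_mode:
            # Teacher role: intact model, no mask, no LoRA.
            return self.base(x)
        # Student role: base output + low-rank LoRA update, then
        # channel-wise soft mask in [0, 1].
        mask = torch.sigmoid(self.mask_logits)
        out = self.base(x) + x @ self.lora_a.T @ self.lora_b.T
        return out * mask


layer = MaskedLoRALinear(16, 8)
x = torch.randn(2, 16)

layer.student_mode = False
teacher_out = layer(x)   # intact forward pass (teacher)

layer.student_mode = True
student_out = layer(x)   # masked + LoRA forward pass (student)
```

Because both roles share the same base weights, a distillation loss between `teacher_out` and `student_out` can be computed with only one model resident in memory; varying the mask sparsity over training yields the "numerous teachers" of progressively decreasing capacity.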