Unlocking the Global Synergies in Low-Rank Adapters
CoRR (2024)
Abstract
Low-Rank Adaptation (LoRA) has been the de facto parameter-efficient
fine-tuning technique for large language models. We present HeteroLoRA, a
lightweight search algorithm that leverages zero-cost proxies to allocate the
limited LoRA trainable parameters across the model for better fine-tuned
performance. In addition to the allocation for standard LoRA-adapted
models, we also demonstrate the efficacy of HeteroLoRA by performing the
allocation in a more challenging search space that includes LoRA modules and
LoRA-adapted shortcut connections. Experiments show that HeteroLoRA improves
model performance given the same parameter budget. For example,
on MRPC, we see an improvement of 1.6% in accuracy with the same training
parameter budget. We will open-source our algorithm once the paper is accepted.
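
The abstract does not spell out HeteroLoRA's zero-cost proxy or its allocation rule, so the sketch below is only an illustration of the general idea it describes: score each candidate LoRA site (attention projections, shortcut connections, etc.) with a zero-cost proxy and distribute a fixed trainable-parameter budget across sites according to those scores. The names (`Candidate`, `allocate_ranks`), the proportional-split rule, and the toy numbers are assumptions for illustration, not the paper's actual method.

```python
# Hypothetical sketch: the proxy scores and the proportional allocation rule
# below are stand-ins; the paper's actual proxy and search procedure are not
# described in the abstract.
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class Candidate:
    """A weight matrix (or shortcut connection) that could receive a LoRA adapter."""
    name: str           # e.g. "layer0.q_proj" (illustrative name)
    in_dim: int         # input width of the adapted weight
    out_dim: int        # output width of the adapted weight
    proxy_score: float  # zero-cost proxy value; higher = more promising


def lora_param_count(c: Candidate, rank: int) -> int:
    # A rank-r adapter W + B @ A adds rank * (in_dim + out_dim) trainable parameters.
    return rank * (c.in_dim + c.out_dim)


def allocate_ranks(candidates: List[Candidate],
                   budget: int,
                   max_rank: int = 32) -> Dict[str, int]:
    """Split a trainable-parameter budget across candidates in proportion to
    their proxy scores, then convert each share into a LoRA rank (rounding down)."""
    total_score = sum(c.proxy_score for c in candidates)
    ranks: Dict[str, int] = {}
    for c in candidates:
        share = budget * c.proxy_score / total_score            # parameters for this site
        ranks[c.name] = min(max_rank, int(share // (c.in_dim + c.out_dim)))
    return ranks


if __name__ == "__main__":
    # Toy search space: three attention projections and one LoRA-adapted
    # shortcut connection, with made-up proxy scores.
    cands = [
        Candidate("layer0.q_proj", 768, 768, proxy_score=0.9),
        Candidate("layer0.v_proj", 768, 768, proxy_score=0.7),
        Candidate("layer0.shortcut", 768, 768, proxy_score=0.4),
        Candidate("layer1.q_proj", 768, 768, proxy_score=0.2),
    ]
    print(allocate_ranks(cands, budget=100_000))
    # -> {'layer0.q_proj': 26, 'layer0.v_proj': 20, 'layer0.shortcut': 11, 'layer1.q_proj': 5}
```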