Chat Vector: A Simple Approach to Equip LLMs with Instruction Following and Model Alignment in New Languages
arXiv (2023)
Abstract
Recently, the development of open-source large language models (LLMs) has
advanced rapidly. Nevertheless, due to data constraints, the capabilities of
most open-source LLMs are primarily focused on English. To address this issue,
we introduce the concept of chat vector to equip pre-trained language models
with instruction following and human value alignment via simple model
arithmetic. The chat vector is derived by subtracting the weights of a
pre-trained base model (e.g., LLaMA2) from those of its corresponding chat model
(e.g., LLaMA2-chat). By simply adding the chat vector to the weights of a continually
pre-trained model, we can endow the model with chat capabilities in new languages
without the need for further training. Our empirical studies demonstrate the
superior efficacy of the chat vector from three different aspects: instruction
following, toxicity mitigation, and multi-turn dialogue. Moreover, to showcase
the adaptability of our approach, we extend our experiments to encompass
various languages, base models, and chat vectors. The results underscore the
chat vector's simplicity, effectiveness, and wide applicability, making it a
compelling solution for efficiently enabling conversational capabilities in
pre-trained language models.
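
The weight arithmetic described above can be summarized in a minimal sketch. The snippet below assumes Hugging Face transformers checkpoints with identical architectures; the continually pre-trained checkpoint name (`your-org/llama-2-7b-cp`) and the output path are hypothetical placeholders, not models released by the authors.

```python
import torch
from transformers import AutoModelForCausalLM

# Load the base model, its chat counterpart, and a continually pre-trained (CP) model.
# Model identifiers are illustrative; substitute your own checkpoints.
base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf", torch_dtype=torch.float32)
chat = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-chat-hf", torch_dtype=torch.float32)
cp = AutoModelForCausalLM.from_pretrained("your-org/llama-2-7b-cp", torch_dtype=torch.float32)  # hypothetical

base_sd = base.state_dict()
chat_sd = chat.state_dict()
cp_sd = cp.state_dict()

# Chat vector: per-parameter difference between the chat model and the base model.
chat_vector = {name: chat_sd[name] - base_sd[name] for name in base_sd}

# New-language chat model: add the chat vector to the CP model's weights.
new_sd = {name: cp_sd[name] + chat_vector[name] for name in cp_sd}
cp.load_state_dict(new_sd)
cp.save_pretrained("llama-2-7b-cp-chat")  # hypothetical output path
```

This assumes every parameter tensor is shared across the three checkpoints (same shapes and names); if continual pre-training extends the vocabulary or otherwise changes shapes, the arithmetic applies only to the matching tensors.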