Multilingual Instruction Tuning With Just a Pinch of Multilinguality
CoRR (2024)
Abstract
As instruction-tuned large language models (LLMs) gain global adoption, their
ability to follow instructions in multiple languages becomes increasingly
crucial. One promising approach is cross-lingual transfer, where a model
acquires a capability in one language by finetuning on another language. In
this work, we investigate how multilinguality during instruction
tuning of a multilingual LLM affects instruction-following across languages. We
first show that many languages transfer some instruction-following capabilities
to other languages even from monolingual tuning. Furthermore, we find that only
40 multilingual examples in an English tuning set substantially improve
multilingual instruction-following, in languages both seen and unseen during
tuning. In general, we observe that models tuned on multilingual mixtures
exhibit comparable or superior performance in several languages compared to
monolingually tuned models, despite training on 10x fewer examples in those
languages. Finally, we find that increasing the number of languages in the
instruction tuning set from 1 to only 2, 3, or 4 increases cross-lingual
generalization. Our results suggest that building massively multilingual
instruction-tuned models can be done with only a very small set of multilingual
instruction-response pairs.
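As an illustration of the data-mixing setup the abstract describes, a minimal sketch is given below. It is not code from the paper: the function name, data format, and sampling details are assumptions, and the only grounded detail is the idea of adding on the order of 40 non-English examples to an otherwise English instruction-tuning set.

```python
import random

def build_tuning_mix(english_examples, multilingual_examples, n_multilingual=40, seed=0):
    """Illustrative sketch (not the paper's code): mix a small 'pinch' of
    multilingual instruction-response pairs into an English tuning set.

    english_examples: list of {"instruction": ..., "response": ...} dicts in English.
    multilingual_examples: list with the same structure in other languages.
    n_multilingual: number of non-English examples to inject; the abstract
        reports that as few as 40 already improve multilingual
        instruction-following.
    """
    rng = random.Random(seed)
    # Sample the small multilingual subset without replacement.
    pinch = rng.sample(multilingual_examples,
                       k=min(n_multilingual, len(multilingual_examples)))
    # Combine with the full English set and shuffle so the multilingual
    # examples are spread across training batches.
    mix = list(english_examples) + pinch
    rng.shuffle(mix)
    return mix
```

The sketch only prepares the mixed dataset; the resulting list would then be fed to whatever instruction-tuning pipeline is in use.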