Poro 34B and the Blessing of Multilinguality
CoRR (2024)
Abstract
The pretraining of state-of-the-art large language models now requires
trillions of words of text, which is orders of magnitude more than available
for the vast majority of languages. While including text in more than one
language is an obvious way to acquire more pretraining data, multilinguality is
often seen as a curse, and most model training efforts continue to focus
near-exclusively on individual large languages. We believe that multilinguality
can be a blessing and that it should be possible to substantially improve over
the capabilities of monolingual models for small languages through multilingual
training. In this study, we introduce Poro 34B, a 34 billion parameter model
trained for 1 trillion tokens of Finnish, English, and programming languages,
and demonstrate that a multilingual training approach can produce a model that
not only substantially advances over the capabilities of existing models for
Finnish, but also excels in translation and is competitive in its class in
generating English and programming languages. We release the model parameters,
scripts, and data under open licenses at
https://huggingface.co/LumiOpen/Poro-34B.
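Since the checkpoint is released on the Hugging Face Hub, it can presumably be loaded with the standard transformers causal-LM API. The following is a minimal sketch under that assumption; the Finnish prompt is an illustrative placeholder, not from the paper, and multi-GPU placement via device_map assumes the accelerate package is installed:

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Load the released Poro 34B checkpoint from the Hugging Face Hub.
    # (Assumption: the checkpoint follows the standard causal-LM layout.)
    tokenizer = AutoTokenizer.from_pretrained("LumiOpen/Poro-34B")
    model = AutoModelForCausalLM.from_pretrained(
        "LumiOpen/Poro-34B",
        torch_dtype=torch.bfloat16,  # 34B parameters; half precision halves memory
        device_map="auto",           # spread layers across available GPUs (needs accelerate)
    )

    # Generate a short Finnish continuation (illustrative prompt).
    inputs = tokenizer("Suomen pääkaupunki on", return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=20)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))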