The Future of Large Language Model Pre-training is Federated
CoRR (2024)
Abstract
Generative pre-trained large language models (LLMs) have demonstrated
impressive performance over a wide range of tasks, thanks to the unprecedented
amount of data they have been trained on. As established scaling laws indicate,
LLMs' future performance improvement depends on the amount of computing and
data sources we can leverage for pre-training. Federated learning (FL) has the
potential to unleash the majority of the planet's data and computational
resources, which are underutilized by the data-center-focused training
methodology of current LLM practice. Our work presents a robust, flexible,
reproducible FL approach that enables large-scale collaboration across
institutions to train LLMs. This would mobilize more computational and data
resources while matching or potentially exceeding centralized performance. We
further show that the effectiveness of federated training scales with model size
and present our approach for training a billion-scale federated LLM using
limited resources. This will help data-rich actors become the protagonists
of LLM pre-training instead of leaving the stage to compute-rich actors alone.
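For intuition only, the sketch below shows one round of a generic federated-averaging (FedAvg-style) aggregation step, in which a server combines locally trained model parameters weighted by each participant's example count. The function and variable names (`fedavg_round`, `client_updates`, the toy `nn.Linear` model) are illustrative assumptions and do not reproduce the authors' actual training system.

```python
# Minimal, illustrative sketch of one FedAvg-style aggregation round.
# All names here are hypothetical; this is a generic federated-averaging
# example, not the authors' implementation.
from typing import Dict, List, Tuple

import torch
import torch.nn as nn


def fedavg_round(
    global_model: nn.Module,
    client_updates: List[Tuple[Dict[str, torch.Tensor], int]],
) -> None:
    """Average client parameter states, weighted by local example counts."""
    total_examples = sum(n for _, n in client_updates)
    avg_state = {
        name: torch.zeros_like(param)
        for name, param in global_model.state_dict().items()
    }
    for state, n in client_updates:
        weight = n / total_examples
        for name, param in state.items():
            avg_state[name] += weight * param
    global_model.load_state_dict(avg_state)


if __name__ == "__main__":
    # Toy model standing in for an LLM; each "client" would train locally on
    # its private data and return its parameter state plus example count.
    model = nn.Linear(8, 8)
    clients = []
    for num_examples in (100, 400, 500):
        local = nn.Linear(8, 8)
        local.load_state_dict(model.state_dict())
        # ... local training on the client's private data would happen here ...
        clients.append((local.state_dict(), num_examples))
    fedavg_round(model, clients)
```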