
Scaling Language Model Size in Cross-Device Federated Learning

Proceedings of the First Workshop on Federated Learning for Natural Language Processing (FL4NLP 2022), 2022

Abstract
Most studies in cross-device federated learning focus on small models, due to the server-client communication and on-device computation bottlenecks. In this work, we leverage various techniques for mitigating these bottlenecks to train larger language models in cross-device federated learning. With systematic applications of partial model training, quantization, efficient transfer learning, and communication-efficient optimizers, we are able to train a 21M parameter Transformer that achieves the same perplexity as that of a similarly sized LSTM with ~10x smaller client-to-server communication cost and 11% lower perplexity than smaller LSTMs commonly studied in literature.
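The abstract lists quantization of client-to-server updates as one of the techniques used to cut communication cost. The sketch below is a minimal illustration of that general idea, not the paper's actual implementation: it uniformly quantizes a flat model-update vector to 8 bits with stochastic rounding before upload, and reconstructs it on the server side. All names (quantize_update, dequantize_update, num_bits) are hypothetical.

```python
# Illustrative sketch only: uniform stochastic quantization of a client
# model update before client-to-server communication. Not the paper's code.
import numpy as np

def quantize_update(update: np.ndarray, num_bits: int = 8):
    """Uniformly quantize a flat update vector to 2**num_bits levels."""
    lo, hi = float(update.min()), float(update.max())
    scale = (hi - lo) / (2 ** num_bits - 1) if hi > lo else 1.0
    levels = (update - lo) / scale
    # Stochastic rounding keeps the quantizer unbiased in expectation.
    rounded = np.floor(levels + np.random.rand(*update.shape))
    return rounded.astype(np.uint8), lo, scale

def dequantize_update(q: np.ndarray, lo: float, scale: float) -> np.ndarray:
    """Server-side reconstruction of the quantized update."""
    return q.astype(np.float32) * scale + lo

if __name__ == "__main__":
    delta = np.random.randn(1_000_000).astype(np.float32)  # a client update
    q, lo, scale = quantize_update(delta, num_bits=8)
    recon = dequantize_update(q, lo, scale)
    # The 8-bit payload is ~4x smaller than float32, at a small reconstruction error.
    print("mean abs error:", np.abs(recon - delta).mean())
```

In practice such quantization would be combined with the other techniques the abstract names (partial model training, efficient transfer learning, communication-efficient optimizers) to reach the reported ~10x reduction; this snippet only demonstrates the quantization step in isolation.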