The N+ Implementation Details of RLHF with PPO: A Case Study on TL;DR Summarization

Shengyi Huang, Michael Noukhovitch, Arian Hosseini, Kashif Rasul, Weixun Wang, Lewis Tunstall

arXiv (2024)

Abstract
This work is the first to openly reproduce the Reinforcement Learning from Human Feedback (RLHF) scaling behaviors reported in OpenAI's seminal TL;DR summarization work. We create an RLHF pipeline from scratch, enumerate over 20 key implementation details, and share key insights gained during the reproduction. Our RLHF-trained Pythia models demonstrate significant gains in response quality that scale with model size, with our 2.8B and 6.9B models outperforming OpenAI's released 1.3B checkpoint. We publicly release the trained model checkpoints and code to facilitate further research and accelerate progress in the field ().
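The abstract does not spell out the pipeline itself; as a rough illustration of the kind of implementation detail the paper enumerates, below is a minimal sketch, assuming PyTorch and the standard KL-penalized per-token reward used in PPO-based RLHF. The function name, `beta`, and tensor shapes are illustrative assumptions, not the authors' API.

```python
# Minimal sketch (not the authors' code) of the per-token reward used in
# PPO-based RLHF: the reward-model score is applied at the final response
# token, and a KL penalty against the reference (SFT) policy is subtracted
# at every token. Shapes and the `beta` coefficient are assumptions.
import torch

def rlhf_rewards(rm_score, logprobs, ref_logprobs, beta=0.05):
    """Combine a scalar reward-model score with a per-token KL penalty.

    rm_score:     (batch,)          reward-model score for each full response
    logprobs:     (batch, resp_len) log-probs of sampled tokens under the policy
    ref_logprobs: (batch, resp_len) log-probs of the same tokens under the reference
    """
    kl = logprobs - ref_logprobs      # per-token KL estimate
    rewards = -beta * kl              # KL penalty applied at every token
    rewards[:, -1] += rm_score        # RM score added at the last token only
    return rewards

# Toy usage with random placeholder tensors.
batch, resp_len = 4, 16
rm_score = torch.randn(batch)
logprobs = torch.randn(batch, resp_len)
ref_logprobs = torch.randn(batch, resp_len)
print(rlhf_rewards(rm_score, logprobs, ref_logprobs).shape)  # torch.Size([4, 16])
```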