A Critical Evaluation of AI Feedback for Aligning Large Language Models
CoRR (2024)
Abstract
Reinforcement learning with AI feedback (RLAIF) is a popular paradigm for
improving the instruction-following abilities of powerful pre-trained language
models. RLAIF first performs supervised fine-tuning (SFT) using demonstrations
from a teacher model and then further fine-tunes the model with reinforcement
learning (RL), using feedback from a critic model. While recent popular
open-source models have demonstrated substantial improvements in performance
from the RL step, in this paper we question whether the complexity of this RL
step is truly warranted for AI feedback. We show that the improvements of the
RL step are almost entirely due to the widespread practice of using a weaker
teacher model (e.g., GPT-3.5) for SFT data collection than the critic (e.g.,
GPT-4) used for AI feedback generation. Specifically, we show that simple
supervised fine-tuning with GPT-4 as the teacher outperforms existing RLAIF
pipelines. More generally, we find that the gains from RLAIF vary substantially
across base model families, test-time evaluation protocols, and critic models.
Finally, we provide a mechanistic explanation for when SFT may outperform the
full two-step RLAIF pipeline as well as suggestions for making RLAIF maximally
useful in practice.
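
To make the two-step pipeline concrete, below is a minimal runnable sketch of the RLAIF structure the abstract describes: SFT on teacher demonstrations, followed by RL against a critic's feedback. All names here (teacher_respond, critic_score, sft_update, rl_update) are illustrative placeholders rather than the paper's implementation, and a plain dictionary stands in for the policy model; a real system would use an LLM training stack.

    import random

    def teacher_respond(prompt):
        # Stand-in for the teacher model (e.g., GPT-3.5 or GPT-4)
        # that produces SFT demonstrations.
        return f"demonstration answer to: {prompt}"

    def critic_score(prompt, response):
        # Stand-in for the critic model (e.g., GPT-4) that scores
        # responses; a random score replaces a real preference judgment.
        return random.random()

    def sft_update(policy, prompt, target):
        # Step 1 (SFT): imitate the teacher's demonstration.
        policy[prompt] = target
        return policy

    def rl_update(policy, prompt, response, reward):
        # Step 2 (RL): keep responses the critic scores highly.
        if reward > 0.5:
            policy[prompt] = response
        return policy

    prompts = ["Explain RLAIF in one line."]
    policy = {}

    # Step 1: supervised fine-tuning on teacher demonstrations.
    for p in prompts:
        policy = sft_update(policy, p, teacher_respond(p))

    # Step 2: further fine-tuning with reinforcement learning,
    # using feedback from the critic model.
    for p in prompts:
        response = policy.get(p, "")
        policy = rl_update(policy, p, response, critic_score(p, response))

The paper's central observation maps directly onto this skeleton: if teacher_respond is already as strong as the model behind critic_score, step 1 alone can match or beat the full two-step pipeline.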