SALMON: Self-Alignment with Instructable Reward Models
arXiv (2023)
Abstract
Supervised Fine-Tuning (SFT) on response demonstrations combined with
Reinforcement Learning from Human Feedback (RLHF) constitutes a powerful
paradigm for aligning LLM-based AI agents. However, a significant limitation of
such an approach is its dependency on high-quality human annotations, making
its application to intricate tasks challenging due to difficulties in obtaining
consistent response demonstrations and in-distribution response preferences.
This paper presents a novel approach, namely SALMON, to align base language
models with minimal human supervision, using only a small set of human-defined
principles, yet achieving superior performance. Central to our approach is an
instructable reward model. Trained on synthetic preference data, this model can
generate reward scores based on arbitrary human-defined principles. By merely
adjusting these principles during the RL training phase, we gain full control
over the preferences with the instructable reward model, subsequently
influencing the behavior of the RL-trained policy models, and reducing the
reliance on the collection of online human preferences. Applying our method to
the LLaMA-2-70b base language model, we developed an AI assistant named
Dromedary-2. With only 6 exemplars for in-context learning and 31 human-defined
principles, Dromedary-2 significantly surpasses the performance of several
state-of-the-art AI systems, including LLaMA-2-Chat-70b, on various benchmark
datasets. We have open-sourced the code and model weights to encourage further
research into aligning LLM-based AI agents with enhanced supervision
efficiency, improved controllability, and scalable oversight.