REBEL: Reinforcement Learning via Regressing Relative Rewards
arXiv (2024)
Abstract
While originally developed for continuous control problems, Proximal Policy
Optimization (PPO) has emerged as the work-horse of a variety of reinforcement
learning (RL) applications including the fine-tuning of generative models.
Unfortunately, PPO requires multiple heuristics to enable stable convergence
(e.g. value networks, clipping) and is notorious for its sensitivity to the
precise implementation of these components. In response, we take a step back
and ask what a minimalist RL algorithm for the era of generative models would
look like. We propose REBEL, an algorithm that cleanly reduces the problem of
policy optimization to regressing, via a direct policy parameterization, the
relative reward between two completions to a prompt, enabling a strikingly
lightweight implementation. In theory, we prove that fundamental RL algorithms
like Natural Policy Gradient can be seen as variants of REBEL, which allows us
to match the strongest known theoretical guarantees in terms of convergence and
sample complexity in the RL literature. REBEL can also cleanly incorporate
offline data and handle the intransitive preferences we frequently see in
practice. Empirically, we find that REBEL provides a unified approach to
language modeling and image generation, with performance stronger than or
similar to PPO and DPO, all while being simpler to implement and more
computationally tractable than PPO.
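The reduction described above can be illustrated with a minimal sketch: for a prompt with two completions y and y', REBEL regresses the scaled difference of log-probability ratios (current policy vs. the previous iterate) onto the difference of their rewards via a squared loss. The function name, signature, and the scalar inputs here are illustrative assumptions, not the paper's actual implementation.

```python
def rebel_loss(logp_y, logp_y2, logp_prev_y, logp_prev_y2,
               reward_y, reward_y2, eta=1.0):
    """Per-example REBEL regression objective (illustrative sketch).

    logp_*       : log-probs of completions y and y' under the current policy
    logp_prev_*  : log-probs under the previous policy iterate
    reward_*     : scalar rewards of the two completions
    eta          : learning-rate-like scale on the log-ratio difference
    """
    # (1/eta) * (log pi(y|x)/pi_t(y|x) - log pi(y'|x)/pi_t(y'|x))
    implicit_gap = ((logp_y - logp_prev_y) - (logp_y2 - logp_prev_y2)) / eta
    # Regress the implicit reward gap onto the observed reward gap.
    reward_gap = reward_y - reward_y2
    return (implicit_gap - reward_gap) ** 2
```

Because the loss is a plain least-squares regression, no value network or clipping heuristic is needed; in practice the log-probabilities would come from the generative model's forward pass and the loss would be averaged over a batch of prompt/completion pairs.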