Inverse-RLignment: Inverse Reinforcement Learning from Demonstrations for LLM Alignment
CoRR (2024)
Abstract
Aligning Large Language Models (LLMs) is crucial for enhancing their safety
and utility. However, existing methods, primarily based on preference datasets,
face challenges such as noisy labels, high annotation costs, and privacy
concerns. In this work, we introduce Alignment from Demonstrations (AfD), a
novel approach leveraging high-quality demonstration data to overcome these
challenges. We formalize AfD within a sequential decision-making framework,
highlighting its unique challenge of missing reward signals. Drawing insights
from forward and inverse reinforcement learning, we introduce divergence
minimization objectives for AfD. Analytically, we elucidate the mass-covering
and mode-seeking behaviors of various approaches, explaining when and why
certain methods are superior. Practically, we propose a computationally
efficient algorithm that extrapolates over a tailored reward model for AfD. We
validate our key insights through experiments on the Harmless and Helpful
tasks, demonstrating strong empirical performance while maintaining
simplicity.
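
For intuition, the two divergence-minimization objectives behind the mass-covering and mode-seeking behaviors can be sketched as follows. This is a minimal sketch, assuming \pi_E denotes the demonstration (expert) distribution over responses y given prompts x and \pi_\theta the model being aligned; the paper's exact occupancy-measure formulation may differ.

\min_\theta \, D_{\mathrm{KL}}\big(\pi_E(y \mid x) \,\|\, \pi_\theta(y \mid x)\big) = -\,\mathbb{E}_{x,\, y \sim \pi_E}\big[\log \pi_\theta(y \mid x)\big] + \mathrm{const.} \quad \text{(forward KL, mass-covering)}

\min_\theta \, D_{\mathrm{KL}}\big(\pi_\theta(y \mid x) \,\|\, \pi_E(y \mid x)\big) = \mathbb{E}_{x,\, y \sim \pi_\theta}\big[\log \pi_\theta(y \mid x) - \log \pi_E(y \mid x)\big] \quad \text{(reverse KL, mode-seeking)}

Minimizing the forward KL reduces to supervised fine-tuning on the demonstrations, since the entropy of \pi_E is constant in \theta, whereas the reverse KL requires samples from \pi_\theta and an estimate of \log \pi_E, which is where a learned reward model enters in the inverse-RL view.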