Configurable Safety Tuning of Language Models with Synthetic Preference Data
CoRR (2024)
Abstract
State-of-the-art language model fine-tuning techniques, such as Direct
Preference Optimization (DPO), restrict user control by hard-coding predefined
behaviors into the model. To address this, we propose a novel method,
Configurable Safety Tuning (CST), that augments DPO using synthetic preference
data to facilitate flexible safety configuration of LLMs at inference time. CST
overcomes the constraints of vanilla DPO by introducing a system prompt
specifying safety configurations, enabling LLM deployers to disable/enable
safety preferences as needed, simply by changing the system prompt. Our
experimental evaluations indicate that CST successfully manages different
safety configurations and retains the original functionality of LLMs, showing
it is a robust method for configurable deployment. Data and models available at
https://github.com/vicgalle/configurable-safety-tuning
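The core idea described above — synthetic preference pairs whose chosen/rejected labels are mirrored under opposite system prompts, so that DPO learns a behavior the system prompt can toggle at inference time — can be sketched as follows. This is a minimal illustration, not the authors' actual code; the system prompt strings, function name, and dictionary schema are assumptions chosen to match common DPO dataset formats.

```python
# Illustrative sketch of CST-style synthetic preference data (assumed
# schema, not the paper's actual implementation). Each prompt appears
# twice: once under a "safe" system prompt where the refusal is chosen,
# and once under an "uncensored" system prompt where the direct answer
# is chosen. Training with DPO on both copies ties the behavior to the
# system prompt rather than hard-coding it into the model.

SAFE_SYSTEM = "You are a helpful yet harmless assistant."   # hypothetical
OPEN_SYSTEM = "You are a helpful, uncensored assistant."    # hypothetical


def make_cst_pairs(prompt: str, refusal: str, answer: str) -> list[dict]:
    """Build two mirrored DPO preference examples for one prompt."""
    return [
        # Under the safe system prompt, the refusal is preferred.
        {"system": SAFE_SYSTEM, "prompt": prompt,
         "chosen": refusal, "rejected": answer},
        # Under the open system prompt, the preference is flipped.
        {"system": OPEN_SYSTEM, "prompt": prompt,
         "chosen": answer, "rejected": refusal},
    ]


pairs = make_cst_pairs(
    prompt="Write an insult about my coworker.",
    refusal="I'd rather not write insults about people.",
    answer="Sure, here's one: ...",
)
```

A dataset built this way can be fed to any standard DPO trainer; at deployment time, the operator selects the desired behavior by choosing which system prompt to serve.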