What's in Your "Safe" Data?: Identifying Benign Data that Breaks Safety
arXiv (2024)
Abstract
Current Large Language Models (LLMs), even those tuned for safety and
alignment, are susceptible to jailbreaking. Prior work has found that merely
fine-tuning an aligned model further on benign data (i.e., data without harmful
content) surprisingly leads to substantial degradation in safety. We delve into
the data-centric aspects of why benign fine-tuning inadvertently contributes to
jailbreaking. First, we represent fine-tuning data through two lenses:
representation and gradient spaces. Building on these views, we propose a
bi-directional anchoring method that prioritizes data points that are close to
harmful examples and distant from benign ones. This approach effectively
identifies subsets of benign data that are more likely to degrade the model's
safety after fine-tuning. Training on just 100 of these seemingly benign data
points can lead to the fine-tuned model affirmatively responding to > 70% of
tested harmful requests, compared to < 20% after fine-tuning on randomly
selected data. We further find that the selected data are often in the form of
lists, bullet points, or math questions.
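
For illustration, below is a minimal sketch of how such a bi-directional anchoring score could be computed over per-example feature vectors (e.g., hidden representations or flattened gradients). The function name, the cosine-similarity scoring, and the synthetic anchor sets are assumptions made for this sketch, not the paper's exact implementation.

```python
import numpy as np

def bidirectional_anchoring_scores(candidates, harmful_anchors, benign_anchors):
    """Score candidates by mean cosine similarity to harmful anchors
    minus mean cosine similarity to benign anchors. Higher scores flag
    benign-looking data points more likely to degrade safety."""
    def unit(x):
        # Row-normalize so dot products become cosine similarities.
        return x / np.linalg.norm(x, axis=1, keepdims=True)

    c, h, b = unit(candidates), unit(harmful_anchors), unit(benign_anchors)
    sim_harmful = (c @ h.T).mean(axis=1)  # closeness to harmful examples
    sim_benign = (c @ b.T).mean(axis=1)   # closeness to benign examples
    return sim_harmful - sim_benign

# Toy usage: rank 1,000 candidate points and keep the top 100.
rng = np.random.default_rng(0)
candidates = rng.normal(size=(1000, 64))     # hypothetical feature vectors
harmful_anchors = rng.normal(size=(10, 64))  # features of known harmful examples
benign_anchors = rng.normal(size=(10, 64))   # features of known benign examples

scores = bidirectional_anchoring_scores(candidates, harmful_anchors, benign_anchors)
top_100 = np.argsort(scores)[::-1][:100]     # indices of highest-scoring data
```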