Backdooring Instruction-Tuned Large Language Models with Virtual Prompt Injection

NAACL-HLT(2024)

引用 0|浏览129
暂无评分
摘要
Instruction-tuned Large Language Models (LLMs) have become a ubiquitousplatform for open-ended applications due to their ability to modulate responsesbased on human instructions. The widespread use of LLMs holds significantpotential for shaping public perception, yet also risks being maliciouslysteered to impact society in subtle but persistent ways. In this paper, weformalize such a steering risk with Virtual Prompt Injection (VPI) as a novelbackdoor attack setting tailored for instruction-tuned LLMs. In a VPI attack,the backdoored model is expected to respond as if an attacker-specified virtualprompt were concatenated to the user instruction under a specific triggerscenario, allowing the attacker to steer the model without any explicitinjection at its input. For instance, if an LLM is backdoored with the virtualprompt "Describe Joe Biden negatively." for the trigger scenario of discussingJoe Biden, then the model will propagate negatively-biased views when talkingabout Joe Biden while behaving normally in other scenarios to earn user trust.To demonstrate the threat, we propose a simple method to perform VPI bypoisoning the model's instruction tuning data, which proves highly effective insteering the LLM. For example, by poisoning only 52 instruction tuning examples(0.1the trained model on Joe Biden-related queries changes from 0highlights the necessity of ensuring the integrity of the instruction tuningdata. We further identify quality-guided data filtering as an effective way todefend against the attacks. Our project page is available athttps://poison-llm.github.io.
更多
查看译文
AI 理解论文
溯源树
样例
生成溯源树,研究论文发展脉络
Chat Paper
正在生成论文摘要