ArtPrompt: ASCII Art-based Jailbreak Attacks against Aligned LLMs
CoRR(2024)
摘要
Safety is critical to the usage of large language models (LLMs). Multiple
techniques such as data filtering and supervised fine-tuning have been
developed to strengthen LLM safety. However, currently known techniques presume
that corpora used for safety alignment of LLMs are solely interpreted by
semantics. This assumption, however, does not hold in real-world applications,
which leads to severe vulnerabilities in LLMs. For example, users of forums
often use ASCII art, a form of text-based art, to convey image information. In
this paper, we propose a novel ASCII art-based jailbreak attack and introduce a
comprehensive benchmark Vision-in-Text Challenge (ViTC) to evaluate the
capabilities of LLMs in recognizing prompts that cannot be solely interpreted
by semantics. We show that five SOTA LLMs (GPT-3.5, GPT-4, Gemini, Claude, and
Llama2) struggle to recognize prompts provided in the form of ASCII art. Based
on this observation, we develop the jailbreak attack ArtPrompt, which leverages
the poor performance of LLMs in recognizing ASCII art to bypass safety measures
and elicit undesired behaviors from LLMs. ArtPrompt only requires black-box
access to the victim LLMs, making it a practical attack. We evaluate ArtPrompt
on five SOTA LLMs, and show that ArtPrompt can effectively and efficiently
induce undesired behaviors from all five LLMs.
更多查看译文
AI 理解论文
溯源树
样例
生成溯源树,研究论文发展脉络
Chat Paper
正在生成论文摘要