Escalation Risks from Language Models in Military and Diplomatic Decision-Making
CoRR (2024)
Abstract
Governments are increasingly considering integrating autonomous AI agents in
high-stakes military and foreign-policy decision-making, especially with the
emergence of advanced generative AI models like GPT-4. Our work aims to
scrutinize the behavior of multiple AI agents in simulated wargames,
specifically focusing on their predilection to take escalatory actions that may
exacerbate multilateral conflicts. Drawing on political science and
international relations literature about escalation dynamics, we design a novel
wargame simulation and scoring framework to assess the escalation risks of
actions taken by these agents in different scenarios. In contrast to prior
studies, our research provides both qualitative and quantitative insights and
focuses on large language models (LLMs). We find that all five studied
off-the-shelf LLMs show forms of escalation and difficult-to-predict escalation
patterns. We observe that models tend to develop arms-race dynamics, leading to
greater conflict, and in rare cases, even to the deployment of nuclear weapons.
Qualitatively, we also collect the models' reported reasoning for chosen
actions and observe worrying justifications based on deterrence and
first-strike tactics. Given the high stakes of military and foreign-policy
contexts, we recommend further examination and cautious consideration before
deploying autonomous language model agents for strategic military or diplomatic
decision-making.
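
To make the setup concrete, below is a minimal sketch of what a turn-based wargame harness with escalation scoring might look like. All names here (ACTIONS, query_model, run_simulation, the nation labels) are hypothetical illustrations, not the authors' actual framework; the paper's real scoring framework is grounded in the escalation-dynamics literature it cites.

```python
# Minimal sketch of a multi-agent wargame loop with per-turn escalation scoring.
# Hypothetical stand-in for the paper's simulation; not the authors' code.

import random

# Hypothetical action set, weighted roughly from de-escalatory to escalatory.
ACTIONS = {
    "negotiate": 0,
    "form_alliance": 1,
    "impose_sanctions": 3,
    "military_posturing": 5,
    "targeted_strike": 8,
    "full_invasion": 9,
    "nuclear_launch": 10,
}

def query_model(model, nation, history):
    """Placeholder for an LLM call returning (action, stated_reasoning).
    A real harness would prompt an off-the-shelf model (e.g., GPT-4) with the
    scenario description and turn history, then parse its chosen action and
    its free-text justification."""
    action = random.choice(list(ACTIONS))  # stand-in for the model's choice
    return action, f"{nation} chose {action}, citing deterrence."

def run_simulation(model, nations, turns=14):
    """Run one simulated wargame, logging actions and reasoning per nation
    and accumulating an escalation score for each turn."""
    history, scores = [], []
    for turn in range(turns):
        turn_score = 0
        for nation in nations:
            action, reasoning = query_model(model, nation, history)
            history.append((turn, nation, action, reasoning))
            turn_score += ACTIONS[action]  # escalation weight of this action
        scores.append(turn_score)
    return history, scores

if __name__ == "__main__":
    _, scores = run_simulation("stub-llm", ["Purple", "Orange", "Green"])
    print("Escalation score per turn:", scores)
```

Tracking both the numeric score trajectory and the logged free-text reasoning mirrors the paper's combined quantitative and qualitative analysis: the scores reveal arms-race-like upward trends, while the justifications expose deterrence- and first-strike-based rationales.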