GoEX: Perspectives and Designs Towards a Runtime for Autonomous LLM Applications
arXiv (2024)
Abstract
Large Language Models (LLMs) are evolving beyond their classical role of
providing information within dialogue systems to actively engaging with tools
and performing actions on real-world applications and services. Today, humans
verify the correctness and appropriateness of the LLM-generated outputs (e.g.,
code, functions, or actions) before putting them into real-world execution.
This poses a significant challenge, as code comprehension is notoriously
difficult. In this paper, we study how humans can efficiently
collaborate with, delegate to, and supervise autonomous LLMs in the future. We
argue that in many cases, "post-facto validation" - verifying the correctness
of a proposed action after seeing the output - is much easier than the
aforementioned "pre-facto validation" setting. Post-facto validation is enabled
by two strategies that mitigate the associated risks: an intuitive undo feature,
and damage confinement for LLM-generated actions. With these, a human can
now either revert the effect of an LLM-generated output or be confident that
the potential risk is bounded. We believe this is critical to unlock the
potential for LLM agents to interact with applications and services with
limited (post-facto) human involvement. We describe the design and
implementation of our open-source runtime for executing LLM actions, Gorilla
Execution Engine (GoEX), and present open research questions towards realizing
the goal of LLMs and applications interacting with each other with minimal
human supervision. We release GoEX at https://github.com/ShishirPatil/gorilla/.