Your Co-Workers Matter: Evaluating Collaborative Capabilities of Language Models in Blocks World
arXiv (2024)
Abstract
Language agents that interact with the world on their own have great
potential for automating digital tasks. While large language model (LLM) agents
have made progress in understanding and executing tasks such as textual games
and webpage control, many real-world tasks also require collaboration with
humans or other LLMs in equal roles, which involves intent understanding, task
coordination, and communication. To test LLMs' ability to collaborate, we
design a blocks-world environment, where two agents, each having unique goals
and skills, build a target structure together. To complete the goals, they can
act in the world and communicate in natural language. Under this environment,
we design increasingly challenging settings to evaluate different collaboration
perspectives, from independent to more complex, dependent tasks. We further
adopt chain-of-thought prompts that include intermediate reasoning steps to
model the partner's state and identify and correct execution errors. Both
human-machine and machine-machine experiments show that LLM agents have strong
grounding capacities, and our approach significantly improves their performance
on the evaluation metrics.
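The two-agent setup described above — a shared target structure, agents with disjoint skills and private goals, and a natural-language channel — can be sketched minimally as follows. This is an illustrative sketch only; all class names, fields, and the action API are hypothetical and not the paper's actual environment code.

```python
# Hypothetical sketch of a two-agent blocks-world environment; names and
# structure are illustrative assumptions, not the paper's implementation.
from dataclasses import dataclass, field


@dataclass
class Agent:
    name: str
    skills: set   # block colors this agent is able to place (its unique skill)
    goal: dict    # private subset of the target: (x, y) -> color


@dataclass
class BlocksWorld:
    target: dict                                  # full target: (x, y) -> color
    grid: dict = field(default_factory=dict)      # blocks placed so far
    messages: list = field(default_factory=list)  # natural-language channel

    def place(self, agent: Agent, pos, color) -> bool:
        """Act in the world; succeeds only within the agent's skill set."""
        if color not in agent.skills or pos in self.grid:
            return False
        self.grid[pos] = color
        return True

    def say(self, agent: Agent, text: str):
        """Communicate in natural language with the partner."""
        self.messages.append((agent.name, text))

    def done(self) -> bool:
        return self.grid == self.target


# Agents with complementary skills must coordinate to finish the structure.
target = {(0, 0): "red", (0, 1): "blue"}
alice = Agent("alice", skills={"red"}, goal={(0, 0): "red"})
bob = Agent("bob", skills={"blue"}, goal={(0, 1): "blue"})
world = BlocksWorld(target)
world.place(alice, (0, 0), "red")
world.say(alice, "I placed red at (0, 0); can you add blue above it?")
world.place(bob, (0, 1), "blue")
```

In this sketch, neither agent can complete the target alone, which is what forces intent understanding and task coordination through the message channel.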