Language Model Evolution: An Iterated Learning Perspective
arXiv (2024)
Abstract
With the widespread adoption of Large Language Models (LLMs), the prevalence
of iterative interactions among these models is anticipated to increase.
Notably, recent advancements in multi-round self-improving methods allow LLMs
to generate new examples for training subsequent models. At the same time,
multi-agent LLM systems, involving automated interactions among agents, are
also increasing in prominence. Thus, in both the short and long term, LLMs may
actively engage in an evolutionary process. We draw parallels between the
behavior of LLMs and the evolution of human culture, as the latter has been
extensively studied by cognitive scientists for decades. Our approach involves
leveraging Iterated Learning (IL), a Bayesian framework that elucidates how
subtle biases are magnified during human cultural evolution, to explain some
behaviors of LLMs. This paper outlines key characteristics of agents' behavior
in the Bayesian-IL framework, including predictions that are supported by
experimental verification with various LLMs. This theoretical framework could
help to more effectively predict and guide the evolution of LLMs in desired
directions.
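To make the Iterated Learning idea concrete: in the Bayesian-IL setting, each agent infers a hypothesis from its predecessor's output and then generates the data the next agent learns from; a classic result of this framework (Griffiths & Kalish) is that, under posterior sampling, the chain's stationary distribution is the agents' prior, so subtle prior biases are amplified over generations. The sketch below simulates this with a simple Beta-Binomial model; the specific model, parameters, and function names are illustrative assumptions, not details from the paper.

```python
import random

def iterated_learning_chain(alpha=2.0, beta=8.0, n=10, generations=50):
    """One chain of Bayesian iterated learning (illustrative sketch).

    Each agent observes n binary samples from its predecessor, forms a
    Beta(alpha + k, beta + n - k) posterior, samples a hypothesis theta
    from it, and emits n new samples for the next agent.
    """
    theta = 0.5  # first agent's hypothesis, deliberately far from the prior mean
    for _ in range(generations):
        k = sum(random.random() < theta for _ in range(n))   # data passed on
        theta = random.betavariate(alpha + k, beta + n - k)  # posterior sample
    return theta

random.seed(0)
finals = [iterated_learning_chain() for _ in range(2000)]
# With posterior sampling, final hypotheses are distributed as the prior,
# so their mean approaches alpha / (alpha + beta) = 0.2.
print(sum(finals) / len(finals))
```

Even though every chain starts at theta = 0.5, the population of final hypotheses reflects the Beta(2, 8) prior, illustrating how iterated transmission magnifies weak inductive biases, which is the mechanism the paper uses to reason about chains of self-improving or interacting LLMs.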