Language Generation in the Limit
arXiv (2024)
Abstract
Although current large language models are complex, the most basic
specifications of the underlying language generation problem itself are simple
to state: given a finite set of training samples from an unknown language,
produce valid new strings from the language that don't already appear in the
training data. Here we ask what we can conclude about language generation using
only this specification, without further assumptions. In particular, suppose
that an adversary enumerates the strings of an unknown target language L that
is known only to come from one of a possibly infinite list of candidates. A
computational agent is trying to learn to generate from this language; we say
that the agent generates from L in the limit if after some finite point in the
enumeration of L, the agent is able to produce new elements that come
exclusively from L and that have not yet been presented by the adversary. Our
main result is that there is an agent that is able to generate in the limit for
every countable list of candidate languages. This contrasts dramatically with
negative results due to Gold and Angluin in a well-studied model of language
learning where the goal is to identify an unknown language from samples; the
difference between these results suggests that identifying a language is a
fundamentally different problem than generating from it.
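To make the setup concrete, here is a toy sketch in Python (a hypothetical illustration of the problem statement, not the paper's algorithm): candidate languages over a countable universe are given by membership predicates and enumerators, an adversary enumerates strings of the target language, and a naive agent picks the first candidate consistent with everything seen so far and emits an unseen string from it. The candidate languages `L_k` (strings of `a` whose length is a multiple of `k`) and the candidate ordering are invented for this example; with nested candidates, this naive strategy can fail, which is part of what the paper's actual construction must handle.

```python
# Toy sketch of "generation in the limit" (hypothetical example, not the
# paper's algorithm). Candidate L_k = strings of 'a' whose length is a
# multiple of k; each candidate is a (membership test, enumerator) pair.
from itertools import count, islice

def make_candidate(k):
    member = lambda s: len(s) % k == 0 and set(s) <= {"a"}
    enumerate_lang = lambda: ("a" * (k * i) for i in count(1))
    return member, enumerate_lang

# Ordering chosen so that this naive strategy happens to work here; in
# general (e.g. a superset listed first) it would not, and the paper's
# agent must cope with such nested candidate lists.
candidates = [make_candidate(k) for k in (3, 2, 1)]

def naive_agent(seen):
    """Pick the first candidate consistent with all strings seen so far,
    then emit a string from it that the adversary has not yet shown."""
    for member, enumerate_lang in candidates:
        if all(member(s) for s in seen):
            for candidate_string in enumerate_lang():
                if candidate_string not in seen:
                    return candidate_string
    return None

# Adversary enumerates the target L = L_2 (even-length runs of 'a').
adversary = ("a" * (2 * i) for i in count(1))
seen = set()
for s in islice(adversary, 5):
    seen.add(s)
    out = naive_agent(seen)  # a new string, here always drawn from L_2
```

After five adversary strings, the agent's output is an even-length run of `a` that the adversary has not yet presented, matching the success condition in the definition above.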