Advancing State of the Art in Language Modeling
CoRR (2023)
Abstract
Generalization is arguably the most important goal of statistical language
modeling research. Publicly available benchmarks and papers published with
open-source code have been critical to advancing the field. However, it is
often very difficult, and sometimes even impossible, to reproduce the results
fully as reported in publications. In this paper, we propose a simple framework
that should help advance the state of the art in language modeling in terms of
generalization. We propose to publish not just the code, but also probabilities
on dev and test sets with future publications so that one can easily add the
new model into an ensemble. This has a crucial advantage: it becomes much
easier to determine whether a newly proposed model is actually complementary
to the current baseline. Therefore, instead of inventing new names for old
tricks, the scientific community can advance faster. Finally, this approach promotes
diversity of ideas: one does not need to create an individual model that is the
new state of the art to attract attention; it will be sufficient to develop a
new model that learns patterns which other models do not. Thus, even a
suboptimal model can be found to have value. Remarkably, our approach has
yielded new state-of-the-art results on various language modeling
benchmarks, with improvements of up to 10%.
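The proposal above can be sketched in a few lines: if each publication ships the per-token probabilities its model assigns to the correct words of a shared test set, anyone can interpolate a new model with an existing one and compare the ensemble's perplexity against each member. This is a minimal illustration, not the paper's implementation; the function names and the fixed interpolation weight are assumptions.

```python
import math

def perplexity(probs):
    """Perplexity from per-token probabilities assigned to the correct words."""
    return math.exp(-sum(math.log(p) for p in probs) / len(probs))

def ensemble(probs_a, probs_b, weight=0.5):
    """Linear interpolation of two models' published per-token probabilities.

    `weight` is a hypothetical mixing coefficient; in practice it would be
    tuned on the dev set.
    """
    return [weight * a + (1 - weight) * b for a, b in zip(probs_a, probs_b)]

# Two models that make errors on different tokens are complementary:
# the ensemble's perplexity drops below both members'.
model_a = [0.5, 0.1, 0.4]
model_b = [0.1, 0.5, 0.4]
mix = ensemble(model_a, model_b)
```

A new model need not beat the baseline on its own: if `perplexity(mix)` is lower than both members', the model has learned patterns the baseline misses, which is exactly the kind of contribution the framework rewards.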