InternLM2 Technical Report
CoRR (2024)
Abstract
The evolution of Large Language Models (LLMs) like ChatGPT and GPT-4 has
sparked discussions on the advent of Artificial General Intelligence (AGI).
However, replicating such advancements in open-source models has been
challenging. This paper introduces InternLM2, an open-source LLM that
outperforms its predecessors in comprehensive evaluations across 6 dimensions
and 30 benchmarks, long-context modeling, and open-ended subjective evaluations
through innovative pre-training and optimization techniques. The pre-training
process of InternLM2 is meticulously detailed, highlighting the preparation of
diverse data types including text, code, and long-context data. InternLM2
efficiently captures long-term dependencies, initially trained on 4k tokens
before advancing to 32k tokens in pre-training and fine-tuning stages,
exhibiting remarkable performance on the 200k "Needle-in-a-Haystack" test.
InternLM2 is further aligned using Supervised Fine-Tuning (SFT) and a novel
Conditional Online Reinforcement Learning from Human Feedback (COOL RLHF)
strategy that addresses conflicting human preferences and reward hacking. By
releasing InternLM2 models in different training stages and model sizes, we
provide the community with insights into the model's evolution.
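The "Needle-in-a-Haystack" test mentioned above buries a known fact inside a long filler context and checks whether the model can retrieve it at various depths. A minimal illustrative sketch of how such a prompt can be constructed (this is an assumption for illustration, not the paper's actual evaluation harness; the filler text, needle, and function name are hypothetical):

```python
# Illustrative "Needle-in-a-Haystack" prompt builder (hypothetical sketch,
# not InternLM2's evaluation code). A known fact (the needle) is inserted
# at a chosen relative depth inside long filler text; the model is then
# asked to retrieve it, probing long-context recall.

FILLER = "The grass is green. The sky is blue. The sun is bright. "
NEEDLE = "The secret passcode is 7481. "
QUESTION = "\nQuestion: What is the secret passcode?"

def build_haystack_prompt(context_chars: int, depth: float) -> str:
    """Return a prompt of roughly context_chars characters with the
    needle placed at relative depth in [0, 1]."""
    # Tile the filler until it covers the requested context length.
    haystack = (FILLER * (context_chars // len(FILLER) + 1))[:context_chars]
    pos = int(len(haystack) * depth)
    return haystack[:pos] + NEEDLE + haystack[pos:] + QUESTION

# Sweep needle depths across the context, as the test typically does.
prompts = [build_haystack_prompt(2000, d / 10) for d in range(11)]
```

Scoring would then check whether the model's answer contains the needle's fact (here, "7481") for each context length and depth combination.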