Panda LLM: Training Data and Evaluation for Open-Sourced Chinese Instruction-Following Large Language Models

CoRR (2023)

Abstract
This project focuses on enhancing open-source large language models through instruction-tuning and on providing comprehensive evaluations of their performance. We explore how training data factors such as quantity, quality, and linguistic distribution influence the performance of instruction-tuned models trained on publicly accessible, high-quality instruction datasets in both English and Chinese. Our goal is to supplement evaluation with quantitative analyses, providing insights useful for the continued advancement of open-source chat models. Our model, data, and code are publicly available for others to use and build upon.
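The abstract only names instruction-tuning without spelling out the recipe. As a rough illustration of what instruction-tuning a causal language model on (instruction, response) pairs typically involves, here is a minimal sketch using Hugging Face transformers. The base model (gpt2), the prompt template, and the toy data below are placeholder assumptions for illustration, not the paper's actual configuration.

```python
# Minimal instruction-tuning sketch (illustrative only; the model,
# prompt template, and data are placeholders, not Panda LLM's setup).
import torch
from torch.utils.data import Dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)


class InstructionDataset(Dataset):
    """Wraps (instruction, response) pairs as causal-LM training examples."""

    def __init__(self, pairs, tokenizer, max_length=512):
        self.examples = []
        for instruction, response in pairs:
            # A common prompt convention; the paper's exact template may differ.
            text = f"### Instruction:\n{instruction}\n\n### Response:\n{response}"
            enc = tokenizer(
                text,
                truncation=True,
                max_length=max_length,
                padding="max_length",
                return_tensors="pt",
            )
            input_ids = enc["input_ids"].squeeze(0)
            attention_mask = enc["attention_mask"].squeeze(0)
            labels = input_ids.clone()
            labels[attention_mask == 0] = -100  # ignore padding in the loss
            self.examples.append(
                {
                    "input_ids": input_ids,
                    "attention_mask": attention_mask,
                    "labels": labels,  # standard next-token LM objective
                }
            )

    def __len__(self):
        return len(self.examples)

    def __getitem__(self, idx):
        return self.examples[idx]


tokenizer = AutoTokenizer.from_pretrained("gpt2")  # placeholder base model
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Toy bilingual examples; real runs would use large public instruction sets.
pairs = [
    ("Translate to Chinese: Hello", "你好"),
    ("Summarize: Instruction-tuning adapts LLMs to follow user requests.",
     "Instruction-tuning teaches LLMs to follow requests."),
]

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="out",
        per_device_train_batch_size=1,
        num_train_epochs=1,
    ),
    train_dataset=InstructionDataset(pairs, tokenizer),
)
trainer.train()
```

In this framing, the data-side factors the paper studies (quantity, quality, and the English/Chinese mix) all enter through the contents of `pairs`, while the training loop itself stays fixed.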
Keywords
language, training data, models, open-sourced, instruction-following