
CIF-Bench: A Chinese Instruction-Following Benchmark for Evaluating the Generalizability of Large Language Models

Findings of the Association for Computational Linguistics: ACL 2024 (2024)

Abstract
The advancement of large language models (LLMs) has enhanced the ability to generalize across a wide range of unseen natural language processing (NLP) tasks through instruction-following. Yet, their effectiveness often diminishes in low-resource languages like Chinese, exacerbated by biased evaluations from data leakage, casting doubt on their true generalizability to new linguistic territories. In response, we introduce the Chinese Instruction-Following Benchmark (CIF-Bench), designed to evaluate the zero-shot generalizability of LLMs to the Chinese language. CIF-Bench comprises 150 tasks and 15,000 input-output pairs, developed by native speakers to test complex reasoning and Chinese cultural nuances across 20 categories. To mitigate data contamination, we release only half of the dataset publicly, with the remainder kept private, and introduce diversified instructions to minimize score variance, totaling 45,000 data instances. Our evaluation of 28 selected LLMs reveals a noticeable performance gap, with the best model scoring only 52.9%, highlighting the limitations of LLMs in less familiar language and task contexts. This work not only uncovers the current limitations of LLMs in handling Chinese language tasks but also sets a new standard for future LLM generalizability research, pushing towards the development of more adaptable, culturally informed, and linguistically diverse models.
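The dataset sizing in the abstract implies three instruction variants per input-output pair (45,000 instances / 15,000 pairs) and 100 pairs per task (15,000 pairs / 150 tasks). A minimal sketch of that arithmetic and one possible instance layout follows; the field names (task_id, instruction, input_text, reference_output) are illustrative assumptions, not taken from the paper.

```python
from dataclasses import dataclass

# Hypothetical record layout for a CIF-Bench-style instance.
# The actual schema is not specified in the abstract.
@dataclass
class BenchmarkInstance:
    task_id: str            # one of the 150 tasks
    instruction: str        # one of several paraphrased instructions per pair
    input_text: str
    reference_output: str

# Sizes stated in the abstract.
NUM_TASKS = 150
NUM_PAIRS = 15_000
NUM_INSTANCES = 45_000

pairs_per_task = NUM_PAIRS // NUM_TASKS          # 100 pairs per task
variants_per_pair = NUM_INSTANCES // NUM_PAIRS   # 3 instruction variants per pair

print(f"{pairs_per_task} pairs per task, {variants_per_pair} instruction variants per pair")
```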