Small Language Models are Good Too: An Empirical Study of Zero-Shot Classification
arXiv (2024)
Abstract
This study contributes to the debate on the efficiency of large versus small
language models for text classification by prompting. We assess the performance
of small language models in zero-shot text classification, challenging the
prevailing dominance of large models. Across 15 datasets, our investigation
benchmarks language models from 77M to 40B parameters using different
architectures and scoring functions. Our findings reveal that small models can
effectively classify texts, performing on par with or surpassing their larger
counterparts. We developed and shared a comprehensive open-source repository
that encapsulates our methodologies. This research underscores the notion that
bigger is not always better, suggesting that resource-efficient small models may
offer viable solutions for specific data classification challenges.
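As an illustration of what "zero-shot classification by prompting with a scoring function" can mean in practice, the sketch below scores each candidate label by the sum of log-probabilities a small causal language model assigns to the label tokens, then picks the highest-scoring label. This is a minimal example of one common scoring function, not the paper's exact pipeline; the model choice (gpt2, ~124M parameters), the prompt template, and the example labels are illustrative assumptions.

```python
# Minimal sketch of zero-shot classification via label log-likelihood
# scoring with a small causal LM. Model, prompt, and labels are
# illustrative assumptions, not taken from the paper.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def label_log_likelihood(text: str, label: str) -> float:
    """Sum of log-probabilities of the label tokens given the prompt."""
    prompt = f"Text: {text}\nTopic:"
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    label_ids = tokenizer(" " + label, return_tensors="pt").input_ids
    input_ids = torch.cat([prompt_ids, label_ids], dim=1)
    with torch.no_grad():
        logits = model(input_ids).logits
    # Logits at position i predict token i+1, so take the positions
    # whose predictions cover the label tokens.
    label_logits = logits[0, prompt_ids.shape[1] - 1 : -1]
    log_probs = label_logits.log_softmax(dim=-1)
    token_scores = log_probs.gather(1, label_ids[0].unsqueeze(1))
    return token_scores.sum().item()

def classify(text: str, labels: list[str]) -> str:
    """Return the candidate label the model scores highest."""
    return max(labels, key=lambda lab: label_log_likelihood(text, lab))

print(classify("The team won the championship game last night.",
               ["sports", "politics", "technology"]))
```

The paper compares several such scoring functions across architectures; this sketch shows only the simplest likelihood-based variant, which needs no fine-tuning and runs comfortably with a model well under 1B parameters.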