High-quality Data-to-Text Generation for Severely Under-Resourced Languages with Out-of-the-box Large Language Models
Conference of the European Chapter of the Association for Computational Linguistics (2024)
Abstract
The performance of NLP methods for severely under-resourced languages cannot
currently hope to match the state of the art in NLP methods for well-resourced
languages. We explore the extent to which pretrained large language models
(LLMs) can bridge this gap, via the example of data-to-text generation for
Irish, Welsh, Breton and Maltese. We test LLMs on these under-resourced
languages and English, in a range of scenarios. We find that LLMs easily set
the state of the art for the under-resourced languages by substantial margins,
as measured by both automatic and human evaluations. For all our languages,
human evaluation shows on-a-par performance with humans for our best systems,
but BLEU scores collapse compared to English, casting doubt on the metric's
suitability for evaluating non-task-specific systems. Overall, our results
demonstrate the great potential of LLMs to bridge the performance gap for
under-resourced languages.