Do Large Language Models Understand Conversational Implicature – A Case Study with a Chinese Sitcom
CoRR (2024)
Abstract
Understanding the non-literal meaning of an utterance is critical for large
language models (LLMs) to become human-like social communicators. In this work,
we introduce SwordsmanImp, the first Chinese multi-turn-dialogue-based dataset
aimed at conversational implicature, sourced from dialogues in the Chinese
sitcom My Own Swordsman. It includes 200 carefully handcrafted
questions, all annotated on which Gricean maxims have been violated. We test
eight closed-source and open-source LLMs on two tasks: a multiple-choice
question task and an implicature explanation task. Our results show that GPT-4
attains human-level accuracy (94%) on the multiple-choice questions; the
second-best model reaches 78.5%, while the other models, including several
open-source models, demonstrate a lower accuracy ranging from 20% to 60%.
Human raters were further asked to rate the LLM-generated explanations of the
implicatures on their reasonability, logic
and fluency. While all models generate largely fluent and self-consistent text,
their explanations score low on reasonability except for GPT-4, suggesting that
most LLMs cannot produce satisfactory explanations of the implicatures in the
conversation. Moreover, we find that LLMs' performance does not vary
significantly across Gricean maxims, suggesting that LLMs do not process
implicatures derived from different maxims differently. Our data and code are available at
derived from different maxims differently. Our data and code are available at
https://github.com/sjtu-compling/llm-pragmatics.
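The multiple-choice evaluation and the per-maxim analysis described above can be sketched as follows. This is a minimal illustration only: the field names, answer labels, and toy data are hypothetical and do not reflect the actual SwordsmanImp data format.

```python
from collections import defaultdict

# Hypothetical items: each pairs a model's chosen option with the gold answer
# and the Gricean maxim annotated as violated in that dialogue.
results = [
    {"maxim": "quantity", "gold": "B", "pred": "B"},
    {"maxim": "quality",  "gold": "A", "pred": "C"},
    {"maxim": "relation", "gold": "D", "pred": "D"},
    {"maxim": "manner",   "gold": "A", "pred": "A"},
]

def accuracy_by_maxim(items):
    """Return overall accuracy and a per-maxim accuracy breakdown."""
    hits = 0
    per_maxim = defaultdict(lambda: [0, 0])  # maxim -> [correct, total]
    for it in items:
        correct = it["pred"] == it["gold"]
        hits += correct
        per_maxim[it["maxim"]][0] += correct
        per_maxim[it["maxim"]][1] += 1
    overall = hits / len(items)
    return overall, {m: c / t for m, (c, t) in per_maxim.items()}

overall, breakdown = accuracy_by_maxim(results)
print(overall)    # 0.75 on the toy data above
print(breakdown)  # per-maxim accuracies, e.g. {'quantity': 1.0, 'quality': 0.0, ...}
```

Comparing the per-maxim accuracies (e.g. with a significance test over the real 200-question dataset) is how one would check the paper's finding that performance does not vary by violated maxim.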