Reason from Fallacy: Enhancing Large Language Models' Logical Reasoning through Logical Fallacy Understanding
CoRR(2024)
Abstract
Large Language Models (LLMs) have demonstrated strong performance on many
reasoning tasks, but they still struggle with some complicated reasoning tasks,
including logical reasoning. One non-negligible cause of LLMs' suboptimal
performance on logical reasoning is their failure to understand logical
fallacies correctly. To evaluate LLMs' capability of logical fallacy
understanding (LFU), we propose five concrete tasks spanning three cognitive
dimensions: WHAT, WHY, and HOW. For these LFU tasks, we construct a new
dataset, LFUD, based on GPT-4 with a small amount of human effort. Our
extensive experiments show that LFUD can be used not only to evaluate LLMs'
LFU capability, but also to fine-tune LLMs for significantly enhanced
performance on logical reasoning.