CoTAR: Chain-of-Thought Attribution Reasoning with Multi-level Granularity
arXiv (2024)
Abstract
State-of-the-art performance in QA tasks is currently achieved by systems
employing Large Language Models (LLMs); however, these models tend to
hallucinate information in their responses. One approach to mitigating this
enhances the generation process by incorporating attribution from the given
input into the output. However, identifying appropriate attributions and
verifying their accuracy against a source is a complex task, and assessing
such systems still requires significant improvement. We introduce an
attribution-oriented Chain-of-Thought reasoning method to enhance the accuracy
of attributions. This approach focuses the reasoning process on generating an
attribution-centric output. Evaluations on two context-enhanced
question-answering datasets using GPT-4 demonstrate improved accuracy and
correctness of attributions. In addition, the combination of our method with
finetuning enhances the response and attribution accuracy of two smaller LLMs,
showing their potential to outperform GPT-4 in some cases.
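The abstract describes prompting the model so that its chain-of-thought reasoning is anchored to attributions from the input context. A minimal sketch of what such an attribution-oriented prompt might look like is shown below; the function name, wording, and structure are illustrative assumptions, not the authors' actual prompt.

```python
# Hypothetical sketch of an attribution-oriented Chain-of-Thought prompt:
# the model is asked to quote the supporting passage for each reasoning
# step before producing the final answer. Illustrative only.

def build_attribution_cot_prompt(question: str, passages: list[str]) -> str:
    """Assemble a QA prompt in which every reasoning step must cite the
    passage (by number) and the exact span that supports it."""
    numbered = "\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return (
        "Answer the question using only the passages below.\n"
        f"Passages:\n{numbered}\n\n"
        f"Question: {question}\n\n"
        "Think step by step. For every reasoning step, cite the passage "
        "number and quote the exact span that supports it. Then give the "
        "final answer together with its supporting passage numbers."
    )

prompt = build_attribution_cot_prompt(
    "Who founded the company?",
    ["Acme Corp was founded by Jane Doe in 1999.",
     "Acme Corp is headquartered in Springfield."],
)
print(prompt)
```

In this sketch the attribution requirement is pushed into the instructions themselves, so the generated chain of thought is forced to be attribution-centric rather than free-form.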