A Chain-of-Thought Is as Strong as Its Weakest Link: A Benchmark for Verifiers of Reasoning Chains
CoRR (2024)
Abstract
Prompting language models to provide step-by-step answers (e.g.,
"Chain-of-Thought") is the prominent approach for complex reasoning tasks,
where more accurate reasoning chains typically improve downstream task
performance. Recent literature discusses automatic methods to verify reasoning
steps to evaluate and improve their correctness. However, no fine-grained
step-level datasets are available to enable thorough evaluation of such
verification methods, hindering progress in this direction. We introduce
Reveal: Reasoning Verification Evaluation, a new dataset to benchmark automatic
verifiers of complex Chain-of-Thought reasoning in open-domain question
answering settings. Reveal includes comprehensive labels for the relevance,
attribution to evidence passages, and logical correctness of each reasoning
step in a language model's answer, across a wide variety of datasets and
state-of-the-art language models.