The Generative AI Paradox on Evaluation: What It Can Solve, It May Not Evaluate
CoRR (2024)
Abstract
This paper explores the assumption that Large Language Models (LLMs) skilled
in generation tasks are equally adept as evaluators. We assess the performance
of three LLMs and one open-source LM in Question-Answering (QA) and evaluation
tasks using the TriviaQA (Joshi et al., 2017) dataset. Results indicate a
significant disparity, with LLMs exhibiting lower performance in evaluation
tasks compared to generation tasks. Intriguingly, we discover instances of
unfaithful evaluation where models accurately evaluate answers in areas where
they lack competence, underscoring the need to examine the faithfulness and
trustworthiness of LLMs as evaluators. This study contributes to the
understanding of "the Generative AI Paradox" (West et al., 2023), highlighting
a need to explore the correlation between generative excellence and evaluation
proficiency, and the necessity to scrutinize the faithfulness aspect in model
evaluations.
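To make the contrast between the two task framings concrete, here is a minimal sketch (not the paper's code) of how a generation task and an evaluation task can be posed to a model over the same TriviaQA-style item; `query_llm` is a hypothetical stand-in for whatever chat API is used.

```python
# Minimal sketch contrasting the two framings described in the abstract:
# answer generation vs. answer evaluation on a TriviaQA-style item.
# `query_llm` is a hypothetical placeholder, not an API from the paper.

def query_llm(prompt: str) -> str:
    """Hypothetical LLM call; replace with a real client of your choice."""
    raise NotImplementedError

def generation_task(question: str) -> str:
    # Generation framing: the model answers the trivia question directly.
    return query_llm(f"Answer the question concisely.\nQ: {question}\nA:")

def evaluation_task(question: str, candidate: str, reference: str) -> str:
    # Evaluation framing: the model judges whether a candidate answer
    # matches the reference answer for the same question.
    return query_llm(
        "Decide if the candidate answer is correct. Reply 'correct' or 'incorrect'.\n"
        f"Q: {question}\nCandidate: {candidate}\nReference: {reference}"
    )
```

Comparing accuracy under these two framings over the same set of items is the kind of generation-versus-evaluation gap the abstract reports.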