RefuteBench: Evaluating Refuting Instruction-Following for Large Language Models
CoRR (2024)
Abstract
The application scope of large language models (LLMs) is increasingly
expanding. In practical use, users might provide feedback based on the model's
output, hoping the model will revise its responses according to that
feedback. Whether the model can appropriately respond to users' refuting
feedback and consistently act on it has not been thoroughly
analyzed. In light of this, this paper proposes a comprehensive benchmark,
RefuteBench, covering tasks such as question answering, machine translation,
and email writing. The evaluation aims to assess whether models can positively
accept feedback in the form of refuting instructions and whether they can
consistently adhere to user demands throughout the conversation. We conduct
evaluations on numerous LLMs and find that LLMs are stubborn, i.e., they
tend to cling to their internal knowledge and often fail to comply with user
feedback. Additionally, as the conversation grows longer, models gradually
forget the user's stated feedback and revert to their original
responses. We further propose recall-and-repeat prompting as a simple and
effective way to enhance the model's responsiveness to feedback.
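The recall-and-repeat idea can be sketched as a prompt-construction step: before each new request, the prompt recalls all feedback the user has given so far and asks the model to restate it before answering. This is a minimal illustrative sketch, not the paper's actual implementation; the function name, prompt wording, and example feedback below are all assumptions.

```python
def build_recall_and_repeat_prompt(feedback_history, new_request):
    """Prepend a recall of all prior user feedback to a new request.

    Hypothetical helper illustrating recall-and-repeat prompting: the
    exact prompt template used in RefuteBench may differ.
    """
    if not feedback_history:
        return new_request
    recalled = "\n".join(f"- {fb}" for fb in feedback_history)
    return (
        "Before answering, recall the feedback I have given so far:\n"
        f"{recalled}\n"
        "Repeat these requirements in your own words, then answer the "
        "following request while strictly following all of them.\n\n"
        f"Request: {new_request}"
    )

# Example multi-turn usage with made-up feedback for a translation task.
feedback = [
    "Translate 'bank' as the financial institution, not the riverbank.",
    "Keep every translation under 20 words.",
]
prompt = build_recall_and_repeat_prompt(
    feedback, "Translate: 'She walked to the bank.'"
)
print(prompt)
```

The intent is that explicitly re-surfacing earlier feedback at every turn counteracts the forgetting behavior the abstract describes, where feedback stated early in a long conversation stops influencing later responses.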