INSTRUCTIR: A Benchmark for Instruction Following of Information Retrieval Models
CoRR (2024)
Abstract
Despite the critical need to align search targets with users' intention,
retrievers often only prioritize query information without delving into the
users' intended search context. Enhancing the capability of retrievers to
understand intentions and preferences of users, akin to language model
instructions, has the potential to yield more aligned search targets. Prior
studies restrict the application of instructions in information retrieval to a
task description format, neglecting the broader context of diverse and evolving
search scenarios. Furthermore, the prevailing benchmarks utilized for
evaluation lack explicit tailoring to assess instruction-following ability,
thereby hindering progress in this field. In response to these limitations, we
propose a novel benchmark, INSTRUCTIR, specifically designed to evaluate
instruction-following ability in information retrieval tasks. Our approach
focuses on user-aligned instructions tailored to each query instance,
reflecting the diverse characteristics inherent in real-world search scenarios.
Through experimental analysis, we observe that retrievers fine-tuned to follow
task-style instructions, such as INSTRUCTOR, can underperform compared to their
non-instruction-tuned counterparts. This underscores potential overfitting
issues inherent in constructing retrievers trained on existing
instruction-aware retrieval datasets.
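The abstract's core idea, evaluating whether an instance-level instruction changes which documents a retriever prefers, can be sketched with a toy example. Everything below is an illustrative assumption: the bag-of-words cosine scorer, the documents, the query, and the instruction are hypothetical stand-ins, not INSTRUCTIR's actual data, models, or metric.

```python
# Hypothetical sketch: compare retrieval with and without an
# instance-level instruction prepended to the query.
# Scorer, corpus, query, and instruction are all illustrative assumptions.
from collections import Counter
import math

def cosine(a: str, b: str) -> float:
    """Bag-of-words cosine similarity over whitespace tokens (toy scorer)."""
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    num = sum(ca[t] * cb[t] for t in set(ca) & set(cb))
    den = (math.sqrt(sum(v * v for v in ca.values()))
           * math.sqrt(sum(v * v for v in cb.values())))
    return num / den if den else 0.0

docs = [
    "A beginner friendly introduction to transformers for new students.",
    "An advanced survey of transformer optimization for researchers.",
]

def rank(q: str) -> int:
    """Return the index of the highest-scoring document for query q."""
    return max(range(len(docs)), key=lambda i: cosine(q, docs[i]))

query = "transformer models"
instruction = "I am a beginner student; prefer introductory material."

plain = rank(query)                          # retrieval from the query alone
instructed = rank(instruction + " " + query) # instruction-augmented retrieval
print(plain, instructed)
```

An instruction-following retriever should shift its top result toward the user's stated preference (here, the introductory document) when the instruction is added; a benchmark like INSTRUCTIR can then score whether that shift matches per-instance gold labels.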