How well do LLMs cite relevant medical references? An evaluation framework and analyses
CoRR (2024)
Abstract
Large language models (LLMs) are currently being used to answer medical
questions across a variety of clinical domains. Recent top-performing
commercial LLMs, in particular, are also capable of citing sources to support
their responses. In this paper, we ask: do the sources that LLMs generate
actually support the claims that they make? To answer this, we propose three
contributions. First, as expert medical annotations are an expensive and
time-consuming bottleneck for scalable evaluation, we demonstrate that GPT-4 is
highly accurate in validating source relevance, agreeing 88% of the time with a
panel of medical doctors. Second, we develop an end-to-end, automated pipeline
called SourceCheckup and use it to evaluate five top-performing LLMs
on a dataset of 1200 generated questions, totaling over 40K pairs of statements
and sources. Interestingly, we find that between 50% and 90% of LLM responses
are not fully supported by the sources they provide. We also evaluate GPT-4
with retrieval augmented generation (RAG) and find that, even still, around
30% of individual statements are unsupported, while nearly half of its
responses are not fully supported. Third, we open-source our curated dataset of
medical questions and expert annotations for future evaluations. Given the
rapid pace of LLM development and the potential harms of incorrect or outdated
medical information, it is crucial to also understand and quantify their
capability to produce relevant, trustworthy medical references.