Attribute Structuring Improves LLM-Based Evaluation of Clinical Text Summaries
arXiv (2024)
Abstract
Summarizing clinical text is crucial in health decision-support and clinical
research. Large language models (LLMs) have shown the potential to generate
accurate clinical text summaries, but still struggle with issues regarding
grounding and evaluation, especially in safety-critical domains such as health.
Holistically evaluating text summaries is challenging because they may contain
unsubstantiated information. Here, we explore a general mitigation framework
using Attribute Structuring (AS), which structures the summary evaluation
process. It decomposes the evaluation process into a grounded procedure that
uses an LLM for relatively simple structuring and scoring tasks, rather than
the full task of holistic summary evaluation. Experiments show that AS
consistently improves the correspondence between human annotations and
automated metrics in clinical text summarization. Additionally, AS yields
interpretations in the form of a short text span corresponding to each output,
which enables efficient human auditing, paving the way towards trustworthy
evaluation of clinical information in resource-constrained scenarios. We
release our code, prompts, and an open-source benchmark at
https://github.com/microsoft/attribute-structuring.
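The two-stage procedure the abstract describes (structure the summary into attributes, then score each attribute against the reference) can be sketched as follows. This is an illustrative outline only, not the authors' implementation: the attribute schema is hypothetical, and simple string matching stands in for the LLM structuring and scoring calls so the control flow is runnable end to end.

```python
# Illustrative sketch of an Attribute Structuring (AS)-style evaluation loop.
# In the actual framework both steps below would be LLM calls; naive string
# matching stands in here so the pipeline runs without any model.

ATTRIBUTES = ["diagnosis", "medication", "follow-up"]  # hypothetical schema


def extract_attributes(text: str) -> dict:
    """Stand-in for the LLM structuring step: pull one short span per attribute."""
    found = {}
    lowered = text.lower()
    for attr in ATTRIBUTES:
        marker = attr + ":"
        idx = lowered.find(marker)
        if idx == -1:
            found[attr] = None  # attribute not mentioned in this text
        else:
            # Take the clause between the marker and the next period.
            found[attr] = text[idx + len(marker):].split(".")[0].strip()
    return found


def score_summary(summary: str, reference: str) -> dict:
    """Score each structured attribute, keeping the supporting spans for auditing."""
    summ = extract_attributes(summary)
    ref = extract_attributes(reference)
    per_attr = {}
    for attr in ATTRIBUTES:
        per_attr[attr] = {
            "match": summ[attr] is not None and summ[attr] == ref[attr],
            "summary_span": summ[attr],      # short span a human can audit
            "reference_span": ref[attr],
        }
    scored = [a for a in ATTRIBUTES if ref[a] is not None]
    overall = (sum(per_attr[a]["match"] for a in scored) / len(scored)
               if scored else 1.0)
    return {"overall": overall, "per_attribute": per_attr}


reference = "Diagnosis: pneumonia. Medication: amoxicillin. Follow-up: 2 weeks."
summary = "Diagnosis: pneumonia. Medication: azithromycin. Follow-up: 2 weeks."
result = score_summary(summary, reference)
```

Because each attribute carries the text spans that justified its score, a disagreement (here, the medication) can be audited in isolation rather than by rereading the whole summary, which is the interpretability property the abstract highlights.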