On the Benefits of Fine-Grained Loss Truncation: A Case Study on Factuality in Summarization
Conference of the European Chapter of the Association for Computational Linguistics (2024)
Abstract
Text summarization and simplification are among the most widely used
applications of AI. However, models developed for such tasks are often prone to
hallucination, which can result from training on unaligned data. One efficient
approach to addressing this issue is Loss Truncation (LT) (Kang and Hashimoto,
2020), which modifies the standard log loss to adaptively remove noisy
examples during training. However, we find that LT alone yields a considerable
number of hallucinated entities on various datasets. We study the behavior of
the underlying losses between factual and non-factual examples, to understand
and refine the performance of LT. We demonstrate that LT's performance is
limited when the underlying assumption that noisy targets have higher NLL loss
is not satisfied, and find that word-level NLL among entities provides a better
signal for distinguishing factuality. We then leverage this observation to propose a
fine-grained NLL loss and fine-grained data cleaning strategies, and observe
improvements in hallucination reduction across some datasets. Our work is
available at https://github.com/yale-nlp/fine-grained-lt.
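
Below is a minimal PyTorch sketch of the two loss signals the abstract contrasts: example-level Loss Truncation, which drops the highest-loss examples in a batch, and a fine-grained token-level NLL restricted to entity tokens. The function names, the entity_mask input, and the use of an in-batch quantile (the original LT method maintains a running quantile estimate during training) are illustrative assumptions, not the authors' released implementation.

import torch
import torch.nn.functional as F

def truncated_nll_loss(logits, targets, drop_frac=0.1, pad_id=0):
    # Example-level Loss Truncation (sketch of Kang & Hashimoto, 2020):
    # compute per-example NLL, then drop the highest-loss fraction of the
    # batch so presumed-noisy targets contribute no gradient.
    # logits: (batch, seq_len, vocab); targets: (batch, seq_len)
    token_nll = F.cross_entropy(
        logits.transpose(1, 2), targets, ignore_index=pad_id, reduction="none"
    )                                                    # (batch, seq_len)
    mask = (targets != pad_id).float()
    example_nll = (token_nll * mask).sum(1) / mask.sum(1).clamp(min=1)
    # In-batch quantile as the truncation threshold (a simplification of
    # the running estimate used in the original method).
    threshold = torch.quantile(example_nll, 1.0 - drop_frac)
    keep = (example_nll <= threshold).float()
    return (example_nll * keep).sum() / keep.sum().clamp(min=1)

def entity_token_nll(logits, targets, entity_mask, pad_id=0):
    # Fine-grained signal: average token-level NLL over entity tokens only.
    # entity_mask (batch, seq_len) marks tokens inside named entities, e.g.
    # produced by an off-the-shelf NER tagger (an assumed preprocessing step).
    token_nll = F.cross_entropy(
        logits.transpose(1, 2), targets, ignore_index=pad_id, reduction="none"
    )
    m = entity_mask.float() * (targets != pad_id).float()
    return (token_nll * m).sum(1) / m.sum(1).clamp(min=1)  # (batch,)

The per-example entity NLL returned by entity_token_nll could then drive either a truncated training loss or a data-cleaning filter that discards targets whose entities the model assigns high loss, in the spirit of the fine-grained strategies described above.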