Improving Recall of Large Language Models: A Model Collaboration Approach for Relational Triple Extraction
arXiv (2024)
Abstract
Relation triple extraction, which outputs a set of triples from long
sentences, plays a vital role in knowledge acquisition. Large language models
can accurately extract triples from simple sentences through few-shot learning
or fine-tuning when given appropriate instructions. However, they often miss
triples when extracting from complex sentences. In this paper, we design an
evaluation-filtering framework that integrates large language models with small
models for relational triple extraction tasks. The framework includes an
evaluation model that can extract related entity pairs with high precision. We
propose a simple labeling principle and a deep neural network to build the
model, embedding the outputs as prompts into the extraction process of the
large model. We conduct extensive experiments to demonstrate that the proposed
method can assist large language models in obtaining more accurate extraction
results, especially from complex sentences containing multiple relational
triples. Our evaluation model can also be embedded into traditional extraction
models to enhance their extraction precision from complex sentences.
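The abstract's pipeline can be illustrated with a minimal sketch: a small evaluation model first filters candidate entity pairs with high precision, and the surviving pairs are embedded as hints into the prompt handed to the large language model. All function names and the scoring heuristic below are illustrative assumptions, not the paper's actual architecture:

```python
# Hedged sketch of the evaluation-filtering idea: a small model keeps
# only entity pairs it judges related, and those pairs are embedded
# into the extraction prompt for the LLM. Names are hypothetical.

def score_pair(sentence, pair):
    # Toy stand-in for the paper's deep neural evaluation model:
    # a pair counts as related if both entities appear in the sentence.
    head, tail = pair
    return 1.0 if head in sentence and tail in sentence else 0.0

def evaluate_entity_pairs(sentence, candidate_pairs, threshold=0.5):
    """Keep only candidate pairs the evaluation model scores as related."""
    return [p for p in candidate_pairs
            if score_pair(sentence, p) >= threshold]

def build_prompt(sentence, filtered_pairs):
    """Embed the high-precision entity pairs as hints in the LLM prompt."""
    hints = "; ".join(f"({h}, {t})" for h, t in filtered_pairs)
    return ("Extract all relational triples from the sentence.\n"
            f"Sentence: {sentence}\n"
            f"Candidate entity pairs (from the evaluation model): {hints}\n"
            "Triples:")

sentence = "Alice works at Acme, which is based in Paris."
candidates = [("Alice", "Acme"), ("Acme", "Paris"), ("Alice", "Mars")]
pairs = evaluate_entity_pairs(sentence, candidates)
prompt = build_prompt(sentence, pairs)
print(pairs)  # the spurious ("Alice", "Mars") pair is filtered out
```

The same filtering step could, as the abstract notes, precede a traditional extraction model instead of an LLM; only `build_prompt` is specific to the prompting setting.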