Counter-intuitive: Large Language Models Can Better Understand Knowledge Graphs Than We Thought
arXiv (2024)
Abstract
Although the method of enhancing large language models' (LLMs') reasoning
ability and reducing their hallucinations through the use of knowledge graphs
(KGs) has received widespread attention, the exploration of how to enable LLMs
to integrate the structured knowledge in KGs on-the-fly remains inadequate.
Researchers often co-train KG embeddings and LLM parameters to equip LLMs with
the ability to comprehend KG knowledge. However, this resource-hungry
training paradigm significantly increases the model learning cost and is also
unsuitable for non-open-source, black-box LLMs. In this paper, we employ
complex question answering (CQA) as a task to assess LLMs' ability to
comprehend KG knowledge. We conducted a comprehensive comparison of KG
knowledge injection methods (from triples to natural language text), aiming to
explore the optimal prompting method for supplying KG knowledge to LLMs,
thereby enhancing their comprehension of KGs. Contrary to our initial
expectations, our analysis revealed that LLMs effectively handle messy, noisy,
and linearized KG knowledge, outperforming methods that employ well-designed
natural language (NL) textual prompts. This counter-intuitive finding provides
substantial insights for future research on LLMs' comprehension of structured
knowledge.
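The two ends of the injection spectrum the abstract contrasts can be illustrated with a minimal sketch: the same KG facts supplied either as linearized triples or as verbalized natural-language sentences. The formatting functions and example triples below are hypothetical illustrations, not the paper's actual prompting templates.

```python
def linearize_triples(triples):
    """Linearized-triple prompt: raw (head, relation, tail) tuples joined as text."""
    return "\n".join(f"({h}, {r}, {t})" for h, r, t in triples)

def verbalize_triples(triples):
    """Natural-language prompt: each triple rewritten as a simple sentence."""
    return " ".join(f"{h} {r.replace('_', ' ')} {t}." for h, r, t in triples)

# Hypothetical KG facts and CQA-style question for illustration.
triples = [("Paris", "capital_of", "France"), ("France", "located_in", "Europe")]
question = "Which continent is the capital of France located in?"

prompt_triples = f"Knowledge:\n{linearize_triples(triples)}\n\nQuestion: {question}"
prompt_nl = f"Knowledge: {verbalize_triples(triples)}\n\nQuestion: {question}"
```

Per the paper's counter-intuitive finding, the `prompt_triples` variant (messy, linearized KG knowledge) would outperform the `prompt_nl` variant despite its less polished surface form.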