
Computer Engineering & Science, 2026, Vol. 48, Issue (1): 162-171.

• Artificial Intelligence and Data Mining

GPR: A large language model enhancement method

GAO Fucai, HE Tingnian, YANG Yang, YANG Jiangwei

  1. (College of Computer Science & Engineering, Northwest Normal University, Lanzhou 730070, China)
  • Received: 2024-05-24  Revised: 2024-09-18  Online: 2026-01-25  Published: 2026-01-25


Abstract: Large language models (LLMs) acquire a wide range of abilities and knowledge from large amounts of data, but they still suffer from problems such as hallucination and insufficient domain-specific knowledge, which can be mitigated by introducing an external knowledge graph. To acquire knowledge from knowledge graphs, a new method called global pruning retrieval (GPR) is proposed: it retrieves relevant relations and entities via breadth-first search (BFS) and then prunes them from a global perspective to extract the most relevant relations and entities. At the same time, the entities mentioned in the question are linked to one another through shortest-path relations. The selected relations and entities are converted into prompts and fed to the LLM, guiding it to reason, generate answers, and verbalize its reasoning process, so that decisions are transparent and traceable. Experimental results on multiple datasets show that GPR offers a clear reasoning advantage, and that the retrieved knowledge better alleviates the hallucination and knowledge-deficit problems of LLMs.
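
The article itself provides no code; the following is a minimal, illustrative sketch of the retrieve-prune-prompt pipeline described in the abstract. The use of a networkx graph, the toy token-overlap relevance scorer, and the top-k pruning threshold are assumptions made purely for illustration, not the authors' implementation.

```python
# Illustrative sketch only -- NOT the authors' GPR implementation.
# Assumptions: the knowledge graph is a networkx.DiGraph whose edges carry a
# "relation" attribute; relevance is scored with a toy token-overlap measure.
import networkx as nx


def score_relevance(question: str, text: str) -> float:
    """Hypothetical relevance scorer (a real system might use embeddings)."""
    q, t = set(question.lower().split()), set(text.lower().split())
    return len(q & t) / (len(q | t) or 1)


def bfs_retrieve(kg: nx.DiGraph, seed_entities: list[str], depth: int = 2) -> list[tuple]:
    """Breadth-first search from the question entities, collecting
    (head, relation, tail) triples within a fixed number of hops."""
    triples, frontier, visited = [], list(seed_entities), set(seed_entities)
    for _ in range(depth):
        next_frontier = []
        for node in frontier:
            if node not in kg:
                continue
            for _, tail, data in kg.out_edges(node, data=True):
                triples.append((node, data.get("relation", "related_to"), tail))
                if tail not in visited:
                    visited.add(tail)
                    next_frontier.append(tail)
        frontier = next_frontier
    return triples


def global_prune(question: str, triples: list[tuple], top_k: int = 10) -> list[tuple]:
    """Global pruning: rank all retrieved triples together by relevance to the
    question and keep the top-k, rather than pruning hop by hop."""
    return sorted(triples,
                  key=lambda tr: score_relevance(question, " ".join(tr)),
                  reverse=True)[:top_k]


def connect_entities(kg: nx.DiGraph, seed_entities: list[str]) -> list[list[str]]:
    """Connect question entities to each other along shortest paths in the KG."""
    g, paths = kg.to_undirected(), []
    for i, a in enumerate(seed_entities):
        for b in seed_entities[i + 1:]:
            if a in g and b in g and nx.has_path(g, a, b):
                paths.append(nx.shortest_path(g, a, b))
    return paths


def build_prompt(question: str, triples: list[tuple], paths: list[list[str]]) -> str:
    """Serialize the pruned triples and entity paths into a prompt that asks the
    LLM to answer and to show its reasoning in text."""
    facts = "\n".join(f"({h}, {r}, {t})" for h, r, t in triples)
    links = "\n".join(" -> ".join(p) for p in paths)
    return (f"Knowledge graph facts:\n{facts}\n\nEntity connections:\n{links}\n\n"
            f"Question: {question}\n"
            f"Answer using the facts above and explain your reasoning step by step.")
```

The resulting prompt string would then be passed to the chosen LLM; the exact prompt template, pruning criterion, and hop limit used in the paper may differ from this sketch.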


Key words: knowledge graph, large language model, synergistic enhancement, information retrieval