GenKP: generative knowledge prompts for enhancing large language models

Times Cited: 0
Authors
Li, Xinbai [1 ]
Peng, Shaowen [1 ]
Yada, Shuntaro [1 ,2 ]
Wakamiya, Shoko [1 ]
Aramaki, Eiji [1 ]
Affiliations
[1] Nara Inst Sci & Technol, 8916-5 Takayama-cho, Ikoma, Nara 630-0192, Japan
[2] Univ Tsukuba, Tsukuba, Ibaraki, Japan
Keywords
Large language models; Knowledge graph; Knowledge prompts; In-context learning
DOI
10.1007/s10489-025-06318-3
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Large language models (LLMs) have demonstrated broad capabilities across natural language processing (NLP) tasks. Knowledge graphs (KGs) store vast numbers of facts and can supply external knowledge to language models. Structured knowledge extracted from a KG must first be converted into sentences to match the input format LLMs expect. Previous work has commonly relied on triple conversion or template-based conversion. However, sentences produced by these methods often suffer from semantic incoherence, ambiguity, and unnaturalness, distorting the original intent and deviating from the underlying facts. Meanwhile, although knowledge-enhanced pre-training and prompt-tuning methods have improved small-scale models, they are difficult to apply to LLMs without substantial computational resources. The strong comprehension ability of LLMs enables in-context learning (ICL), which improves performance without additional training. In this paper, we propose GenKP, a knowledge-prompt generation method that injects knowledge into LLMs through ICL. Rather than inserting triple-converted or template-converted knowledge without selection, GenKP generates knowledge samples with an LLM guided by the KG and filters them through weighted verification and BM25 ranking, reducing knowledge noise. Experimental results show that incorporating knowledge prompts improves LLM performance, and that LLMs augmented with GenKP achieve larger gains than triple- and template-based knowledge injection.
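For illustration only, the following minimal Python sketch contrasts the two baseline conversion styles with a GenKP-style selection step. The function names, the linear combination of scores, and the alpha weight are assumptions made for exposition, not the paper's implementation; the verification scores are stand-ins for the weighted-verification step, and the BM25 ranking uses the third-party rank_bm25 package.

# Hypothetical sketch of knowledge conversion and GenKP-style selection.
# All names, the linear weighting, and `alpha` are illustrative assumptions;
# only the BM25 scoring uses the real rank_bm25 package (pip install rank-bm25).
from rank_bm25 import BM25Okapi


def triple_to_sentence(head: str, relation: str, tail: str) -> str:
    """Naive triple conversion: ('Ikoma', 'located_in', 'Nara') ->
    'Ikoma located in Nara.'"""
    return f"{head} {relation.replace('_', ' ')} {tail}."


def template_to_sentence(head: str, relation: str, tail: str,
                         templates: dict) -> str:
    """Template-based conversion using a per-relation sentence pattern."""
    pattern = templates.get(relation, "{h} {r} {t}.")
    return pattern.format(h=head, r=relation, t=tail)


def select_knowledge_prompts(question: str, candidates: list,
                             verify_scores: list, alpha: float = 0.5,
                             k: int = 3) -> list:
    """Rank LLM-generated knowledge samples by a weighted combination of an
    assumed verification score in [0, 1] and BM25 relevance to the question,
    then keep the top-k as in-context-learning demonstrations."""
    bm25 = BM25Okapi([c.lower().split() for c in candidates])
    relevance = bm25.get_scores(question.lower().split())
    max_rel = max(relevance) or 1.0  # normalize; avoid division by zero
    combined = [alpha * v + (1.0 - alpha) * (r / max_rel)
                for v, r in zip(verify_scores, relevance)]
    ranked = sorted(zip(combined, candidates), reverse=True)
    return [cand for _, cand in ranked[:k]]


if __name__ == "__main__":
    question = "Which prefecture is NAIST located in?"
    candidates = [
        "NAIST is located in Ikoma, Nara Prefecture, Japan.",
        "NAIST researches natural language processing.",
        "Nara Prefecture borders Osaka Prefecture.",
    ]
    # Verification scores would come from the weighted-verification step.
    print(select_knowledge_prompts(question, candidates, [0.9, 0.6, 0.7], k=2))

The top-ranked samples would then be prepended to the task input as in-context demonstrations, requiring no parameter updates to the LLM.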
Pages: 15