GenKP: generative knowledge prompts for enhancing large language models

Cited by: 0
Authors
Li, Xinbai [1 ]
Peng, Shaowen [1 ]
Yada, Shuntaro [1 ,2 ]
Wakamiya, Shoko [1 ]
Aramaki, Eiji [1 ]
Affiliations
[1] Nara Inst Sci & Technol, 8916-5 Takayama-cho, Ikoma, Nara 630-0192, Japan
[2] Univ Tsukuba, Tsukuba, Ibaraki, Japan
Keywords
Large language models; Knowledge graph; Knowledge prompts; In-context learning
DOI
10.1007/s10489-025-06318-3
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Large language models (LLMs) have demonstrated extensive capabilities across natural language processing (NLP) tasks. Knowledge graphs (KGs) store vast numbers of facts and can furnish external knowledge to language models. Structured knowledge extracted from a KG must be converted into sentences to match the input format LLMs require. Prior work has typically relied on triple conversion or template-based conversion, but sentences produced by these methods often suffer from semantic incoherence, ambiguity, and unnaturalness, distorting the original intent and causing the sentences to deviate from the underlying facts. Meanwhile, although knowledge-enhanced pre-training and prompt-tuning have brought improvements on small-scale models, they are difficult to apply to LLMs when computational resources are limited. The advanced comprehension of LLMs enables in-context learning (ICL), which improves their performance without additional training. In this paper, we propose GenKP, a knowledge-prompt generation method that injects knowledge into LLMs through ICL. Rather than inserting triple-converted or template-converted knowledge without selection, GenKP generates knowledge samples with an LLM in conjunction with a KG and then filters them through weighted verification and BM25 ranking, reducing knowledge noise. Experimental results show that incorporating knowledge prompts improves LLM performance, and that LLMs augmented with GenKP achieve larger gains than triple- and template-based knowledge injection.
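To make the pipeline described in the abstract concrete, the sketch below walks through the three stages it names: template-based conversion of KG triples into sentences (the baseline GenKP is compared against), BM25 ranking to select the candidates most relevant to a question, and assembly of an in-context-learning prompt. This is a minimal illustration under assumed inputs, not the paper's implementation: the template strings, triples, and question are hypothetical, BM25 is implemented from scratch in its standard Okapi form, and GenKP's LLM-based sample generation and weighted-verification steps are not reproduced here.

import math
import re
from collections import Counter

def tokenize(text):
    # Lowercase and split on non-alphanumerics; a deliberately simple tokenizer.
    return re.findall(r"[a-z0-9]+", text.lower())

def bm25_rank(query, candidates, k1=1.5, b=0.75):
    # Order candidate knowledge sentences by Okapi BM25 relevance to the query.
    docs = [tokenize(c) for c in candidates]
    n = len(docs)
    avgdl = sum(len(d) for d in docs) / n
    df = Counter()                      # document frequency per term
    for d in docs:
        df.update(set(d))
    q_terms = tokenize(query)

    def score(doc):
        tf = Counter(doc)
        total = 0.0
        for t in q_terms:
            if tf[t] == 0:
                continue
            idf = math.log((n - df[t] + 0.5) / (df[t] + 0.5) + 1.0)
            norm = tf[t] + k1 * (1.0 - b + b * len(doc) / avgdl)
            total += idf * tf[t] * (k1 + 1.0) / norm
        return total

    return sorted(candidates, key=lambda c: score(tokenize(c)), reverse=True)

# Template-based triple conversion: the baseline the paper contrasts with.
# These template strings are illustrative, not taken from the paper.
TEMPLATES = {
    "capital_of": "{h} is the capital of {t}.",
    "located_in": "{h} is located in {t}.",
    "born_in":    "{h} was born in {t}.",
}
triples = [
    ("Paris", "capital_of", "France"),
    ("France", "located_in", "Europe"),    # true but irrelevant: knowledge noise
    ("Marie Curie", "born_in", "Warsaw"),  # unrelated fact
]
candidates = [TEMPLATES[r].format(h=h, t=t) for h, r, t in triples]

question = "What is the capital of France?"
top_k = bm25_rank(question, candidates)[:2]

# In-context learning: prepend the selected knowledge to the question,
# so the LLM is conditioned on it without any parameter updates.
prompt = "\n".join(top_k) + f"\n\nQuestion: {question}\nAnswer:"
print(prompt)

Keeping only the top-ranked sentences is what excludes true-but-irrelevant facts (the Europe triple above) from the prompt; in the abstract's terms, this is how ranking-based selection reduces knowledge noise before injection.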
Pages: 15