Give us the Facts: Enhancing Large Language Models With Knowledge Graphs for Fact-Aware Language Modeling

Cited by: 34
Authors
Yang, Linyao [1 ]
Chen, Hongyang [1 ]
Li, Zhao [1 ]
Ding, Xiao [2 ]
Wu, Xindong [1 ]
Affiliations
[1] Zhejiang Lab, Hangzhou 311121, Peoples R China
[2] Harbin Inst Technol, Res Ctr Social Comp & Informat Retrieval, Harbin 150001, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Large language model; knowledge graph; ChatGPT; knowledge reasoning; knowledge management;
DOI
10.1109/TKDE.2024.3360454
Chinese Library Classification
TP18 [Theory of Artificial Intelligence];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Recently, ChatGPT, a representative large language model (LLM), has gained considerable attention. Owing to their powerful emergent abilities, recent LLMs are considered a possible alternative to structured knowledge bases such as knowledge graphs (KGs). However, while LLMs are proficient at learning probabilistic language patterns and conversing with humans, they, like earlier smaller pre-trained language models (PLMs), still struggle to recall facts when generating knowledge-grounded content. To overcome these limitations, researchers have proposed enhancing data-driven PLMs with knowledge-based KGs, incorporating explicit factual knowledge into PLMs to improve their performance on text generation that requires factual knowledge and to provide more informed responses to user queries. This paper reviews studies on enhancing PLMs with KGs, detailing existing knowledge graph-enhanced pre-trained language models (KGPLMs) as well as their applications. Inspired by existing studies on KGPLMs, this paper proposes enhancing LLMs with KGs by developing knowledge graph-enhanced large language models (KGLLMs). KGLLMs offer a way to strengthen LLMs' factual reasoning ability, opening up new avenues for LLM research.
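As a concrete illustration of the knowledge-injection idea the abstract describes, the minimal Python sketch below retrieves triples from a toy knowledge graph and prepends them to a user query before it would be passed to an LLM. The triple store, the string-based entity matcher, and the prompt template are hypothetical simplifications for illustration only, not the KGPLM or KGLLM methods surveyed in the paper.

```python
# Minimal sketch (not the paper's method): ground an LLM query in explicit
# KG facts by retrieving relevant triples and injecting them into the prompt.

# Toy knowledge graph as (subject, relation, object) triples.
KG = [
    ("Zhejiang Lab", "located_in", "Hangzhou"),
    ("Hangzhou", "located_in", "Zhejiang Province"),
    ("ChatGPT", "developed_by", "OpenAI"),
]

def retrieve_facts(query: str, kg=KG):
    """Return triples whose subject or object entity appears in the query."""
    q = query.lower()
    return [(s, r, o) for s, r, o in kg if s.lower() in q or o.lower() in q]

def build_grounded_prompt(query: str) -> str:
    """Prepend retrieved KG facts to the query before calling an LLM."""
    facts = retrieve_facts(query)
    fact_lines = "\n".join(f"- {s} {r.replace('_', ' ')} {o}" for s, r, o in facts)
    return f"Known facts:\n{fact_lines}\n\nQuestion: {query}\nAnswer:"

print(build_grounded_prompt("In which city is Zhejiang Lab located?"))
```

In practice, as the surveyed KGPLM work discusses, knowledge can also be injected during pre-training or fused into model representations rather than only at inference time; this sketch shows only the simplest retrieval-and-prompt variant.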
Pages: 3091-3110
Page count: 20