Knowledge Graphs and Their Reciprocal Relationship with Large Language Models

Cited by: 0
Authors
Dehal, Ramandeep Singh [1 ]
Sharma, Mehak [1 ]
Rajabi, Enayat [1 ]
Affiliations
[1] Cape Breton Univ, Management Sci Dept, Sydney, NS B1M 1A2, Canada
Funding
Natural Sciences and Engineering Research Council of Canada (NSERC);
Keywords
Knowledge Graphs; Large Language Models; machine learning; artificial intelligence;
DOI
10.3390/make7020038
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
The reciprocal relationship between Large Language Models (LLMs) and Knowledge Graphs (KGs) highlights their synergistic potential in enhancing artificial intelligence (AI) applications. LLMs, with their natural language understanding and generative capabilities, support the automation of KG construction through entity recognition, relation extraction, and schema generation. Conversely, KGs serve as structured and interpretable data sources that improve the transparency, factual consistency, and reliability of LLM-based applications, mitigating challenges such as hallucinations and lack of explainability. This study conducts a systematic literature review of 77 studies to examine the AI methodologies supporting LLM-KG integration, including symbolic AI, machine learning, and hybrid approaches. It explores diverse applications spanning healthcare, finance, justice, and industrial automation, revealing the transformative potential of this synergy. Through in-depth analysis, the study identifies key limitations of current approaches: challenges in scaling and maintaining dynamic, real-time Knowledge Graphs; difficulty in adapting general-purpose LLMs to specialized domains; limited explainability in tracing model outputs back to interpretable reasoning; and ethical concerns surrounding bias, fairness, and transparency. In response, the study highlights potential strategies for optimizing LLM-KG synergy. The findings provide actionable insights for researchers and practitioners building robust, transparent, and adaptive AI systems that enhance knowledge-driven applications through LLM-KG integration, further advancing generative AI and explainable AI (XAI).
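The abstract describes a two-way coupling: an LLM extracts structure from text to build a KG, and the KG is then used to ground the LLM's answers. The sketch below is not from the paper; it is a minimal illustration of both directions, assuming a generic chat-completion endpoint. The names call_llm, extract_triples, build_kg, and grounded_answer are hypothetical helpers introduced here, and networkx stands in for any graph store.

```python
# Minimal sketch (not from the reviewed paper) of the reciprocal LLM-KG loop:
# (1) LLM-assisted relation extraction builds a Knowledge Graph from text;
# (2) triples retrieved from that graph are injected into the prompt to
#     ground the LLM's answer and reduce hallucination.

import networkx as nx


def call_llm(prompt: str) -> str:
    """Hypothetical placeholder for any chat-completion API call."""
    raise NotImplementedError("Wire this to an LLM provider of your choice.")


def extract_triples(text: str) -> list[tuple[str, str, str]]:
    """Ask the LLM for (subject | relation | object) triples, one per line."""
    prompt = (
        "Extract (subject | relation | object) triples, one per line:\n" + text
    )
    triples = []
    for line in call_llm(prompt).splitlines():
        parts = [p.strip() for p in line.split("|")]
        if len(parts) == 3:
            triples.append(tuple(parts))
    return triples


def build_kg(triples: list[tuple[str, str, str]]) -> nx.MultiDiGraph:
    """KG construction: nodes are entities, edge keys are relation labels."""
    kg = nx.MultiDiGraph()
    for subj, rel, obj in triples:
        kg.add_edge(subj, obj, key=rel)
    return kg


def grounded_answer(kg: nx.MultiDiGraph, question: str) -> str:
    """KG-augmented generation: prepend retrieved facts to the prompt."""
    facts = [
        f"{s} {r} {o}"
        for s, o, r in kg.edges(keys=True)  # (subject, object, relation)
        if s.lower() in question.lower() or o.lower() in question.lower()
    ]
    prompt = (
        "Answer using only these facts:\n" + "\n".join(facts) + "\n\nQ: " + question
    )
    return call_llm(prompt)
```

A MultiDiGraph is used so that several relations between the same pair of entities can coexist, mirroring how KG triples are typically stored; the keyword-overlap retrieval in grounded_answer is deliberately simplistic and would be replaced by entity linking or graph traversal in practice.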
Pages: 26
References (104 in total)
[91]  
Ventura de los Ojos X., 2024, Application of LLM-Augmented Knowledge Graphs for Wirearchy Management
[92]   Representing the Interaction between Users and Products via LLM-assisted Knowledge Graph Construction [J].
Vizcarra, Julio ;
Haruta, Shuichiro ;
Kurokawa, Mori .
18TH IEEE INTERNATIONAL CONFERENCE ON SEMANTIC COMPUTING, ICSC 2024, 2024, :231-232
[93]  
Wang F., 2024, LLM-KGMQA: Large Language Model-Augmented Multi-Hop Question-Answering System Based on Knowledge Graph in Medical Field, DOI 10.21203/rs.3.rs-4721418/v1
[94]   LLMRec: Large Language Models with Graph Augmentation for Recommendation [J].
Wei, Wei ;
Ren, Xubin ;
Tang, Jiabin ;
Wang, Qinyong ;
Su, Lixin ;
Cheng, Suqi ;
Wang, Junfeng ;
Yin, Dawei ;
Huang, Chao .
PROCEEDINGS OF THE 17TH ACM INTERNATIONAL CONFERENCE ON WEB SEARCH AND DATA MINING, WSDM 2024, 2024, :806-815
[95]  
Wu L.I., 2023, P 2023 IEEE INT C ME, P278, DOI 10.1109/MedAI59581.2023.00043
[96]   Zero-Shot Construction of Chinese Medical Knowledge Graph with GPT-3.5-Turbo and GPT-4 [J].
Wu, Ling-i ;
Su, Yuxin ;
Li, Guoqiang .
ACM TRANSACTIONS ON MANAGEMENT INFORMATION SYSTEMS, 2025, 16 (02)
[97]  
Wu Qinglin, 2023, 2023 16th International Symposium on Computational Intelligence and Design (ISCID), P161, DOI 10.1109/ISCID59865.2023.00045
[98]   ChatTf: A Knowledge Graph-Enhanced Intelligent Q&A System for Mitigating Factuality Hallucinations in Traditional Folklore [J].
Xu, Jun ;
Zhang, Hao ;
Zhang, Haijing ;
Lu, Jiawei ;
Xiao, Gang .
IEEE ACCESS, 2024, 12 :162638-162650
[99]   Enhancing Retrieval-Augmented Generation Models with Knowledge Graphs: Innovative Practices Through a Dual-Pathway Approach [J].
Xu, Sheng ;
Chen, Mike ;
Chen, Shuwen .
ADVANCED INTELLIGENT COMPUTING TECHNOLOGY AND APPLICATIONS, PT VI, ICIC 2024, 2024, 14880 :398-409
[100]  
2024, Social Medicine and Health Management, V5, DOI 10.23977/socmhm.2024.050208