A survey on augmenting knowledge graphs (KGs) with large language models (LLMs): models, evaluation metrics, benchmarks, and challenges

Cited by: 1
Authors
Ibrahim, Nourhan [1 ,2 ]
Aboulela, Samar [1 ]
Ibrahim, Ahmed [3 ]
Kashef, Rasha [1 ]
Affiliations
[1] Electrical, Computer, and Biomedical Engineering, Toronto Metropolitan University, Toronto, ON
[2] Faculty of Engineering, Alexandria University, Alexandria
[3] Computer Science, Western University, London, ON
Source
Discover Artificial Intelligence | 2024 / Vol. 4 / No. 1
Keywords
Deep learning (DL); Evaluation metrics; Knowledge graphs (KGs); Large language models (LLMs); Retrieval-augmented generation (RAG)
DOI
10.1007/s44163-024-00175-8
Abstract
Integrating Large Language Models (LLMs) with Knowledge Graphs (KGs) enhances the interpretability and performance of AI systems. This survey comprehensively analyzes this integration, classifying approaches into three fundamental paradigms: KG-augmented LLMs, LLM-augmented KGs, and synergized frameworks. For each paradigm, we examine the methodology, strengths, drawbacks, and practical applications in real-world scenarios. We also describe essential evaluation metrics and benchmarks for assessing the performance of these integrations, discuss challenges such as scalability and computational overhead, and outline potential solutions. The findings underscore the substantial impact of these integrations on improving real-time data analysis, streamlining decision-making, and fostering innovation across various domains. © The Author(s) 2024.