Large Models and Multimodal: A Survey of Cutting-Edge Approaches to Knowledge Graph Completion

Times Cited: 0
Authors
Wu, Minxin [1 ]
Gong, Yufei [1 ]
Lu, Heping [2 ]
Li, Baofeng [2 ]
Wang, Kai [1 ]
Zhou, Yanquan [1 ]
Li, Lei [1 ]
Affiliations
[1] Beijing University of Posts and Telecommunications, Beijing 100876, People's Republic of China
[2] China Electric Power Research Institute, Beijing 100192, People's Republic of China
Source
ADVANCED INTELLIGENT COMPUTING TECHNOLOGY AND APPLICATIONS, PT IV, ICIC 2024 | 2024, Vol. 14878
Keywords
Knowledge Graph Completion; Representation Learning; Large Language Models; Multimodal; Pre-training Models
DOI
10.1007/978-981-97-5672-8_14
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline Code
081104; 0812; 0835; 1405
Abstract
Knowledge graph completion (KGC) is a critical task in the evolution and application of new-generation knowledge graphs. With the advancement of multimodal learning and the rise of large models, knowledge graphs have developed rapidly, and researchers have significantly enhanced KGC by integrating these emerging technologies with knowledge graphs. Against this backdrop, this paper systematically reviews the evolution of KGC methods, from traditional representation learning approaches to those based on pre-training models, large language models (LLMs), and multimodal techniques. Specifically, we examine how these emerging methods are applied to KGC problems and how effective they are, emphasizing their strengths and limitations in capturing knowledge associations and handling complex semantic information. Finally, we outline future research directions, aiming to give researchers a comprehensive and up-to-date perspective on KGC and to offer useful insights for future research and applications.
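As a brief illustration of the traditional representation-learning family the survey covers (added here for context; the specific formulations discussed in the paper are not reproduced in this record), a translational embedding model such as TransE represents entities and relations as vectors and scores a candidate triple (h, r, t) by how well the relation vector translates the head embedding onto the tail embedding:

f(h, r, t) = -\lVert \mathbf{h} + \mathbf{r} - \mathbf{t} \rVert

with \mathbf{h}, \mathbf{r}, \mathbf{t} \in \mathbb{R}^{d} learned from the observed triples. Completion then amounts to ranking candidate entities for a query such as (h, r, ?) by this score and proposing the highest-scoring links as missing facts.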
Pages: 163-174
Number of pages: 12