Linear self-attention with multi-relational graph for knowledge graph completion

Cited by: 0
Authors
Liu, Weida [1 ]
Qiang, Baohua [1 ]
Chen, Ruidong [1 ]
Xie, Yuan [1 ]
Chen, Lirui [1 ]
Chen, Zhiqin [1 ]
Affiliations
[1] Guilin Univ Elect Technol, Guangxi Key Lab Image & Graph Intelligent Proc, Guilin 541004, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Knowledge graph completion; Knowledge graph embedding; Graph convolutional networks; Linear self-attention;
DOI
10.1007/s10489-025-06592-1
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Knowledge graph completion (KGC) aims to infer missing facts from existing knowledge. Graph Convolutional Networks (GCNs) have gained significant traction in KGC because of their strength in modeling graph structure. In GCN-based KGC methods, a GCN first produces rich entity representations, and a Knowledge Graph Embedding (KGE) model then scores the interactions between entities and relations. However, most GCN-based KGC models ignore long-range pairwise relationships in the graph. To address this limitation, we propose the Linear Self-Attention with Multi-Relational Graph Network (LTRGN). Specifically, the model combines a GCN with linear self-attention to form the encoder; the linear self-attention captures long-range node dependencies without introducing excessive computational overhead. Furthermore, we implement an attention mechanism that better assesses the importance of each neighboring node relative to the source node. We demonstrate the effectiveness of LTRGN on the standard FB15k-237, WN18RR, Kinship, and UMLS datasets. On the dense graphs Kinship and UMLS, our model improves MRR by 1.3% and 4.1%, and Hits@1 by 1.7% and 6.4%, respectively, over the best-performing baseline. These results show the efficacy of the model for the KGC task. The code is released at https://github.com/lixianqingliuyan/LTRGN.
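To make the linear self-attention idea concrete, below is a minimal PyTorch sketch of kernelized (linear) self-attention over node embeddings. The abstract does not specify which linearization LTRGN uses, so this follows the common feature-map formulation (elu(x) + 1, as in linear transformers): softmax(QK^T)V is replaced by phi(Q)(phi(K)^T V), so the N x N attention matrix over nodes is never materialized. All names and shapes here are illustrative assumptions, not the paper's released implementation.

```python
import torch
import torch.nn.functional as F

def linear_self_attention(x, w_q, w_k, w_v, eps=1e-6):
    # x: (N, d) node embeddings; w_q, w_k, w_v: (d, d) projections.
    # Feature map phi(u) = elu(u) + 1 keeps scores positive so the
    # normalizer below plays the role of the softmax denominator.
    q = F.elu(x @ w_q) + 1.0           # phi(Q): (N, d)
    k = F.elu(x @ w_k) + 1.0           # phi(K): (N, d)
    v = x @ w_v                        # V:      (N, d)
    kv = k.transpose(0, 1) @ v         # (d, d) global key-value summary
    z = q @ k.sum(dim=0).unsqueeze(1)  # (N, 1) per-node normalizer
    return (q @ kv) / (z + eps)        # (N, d) attended node features

# Toy usage: 5 nodes with 8-dimensional embeddings.
torch.manual_seed(0)
N, d = 5, 8
x = torch.randn(N, d)
w_q, w_k, w_v = (torch.randn(d, d) for _ in range(3))
out = linear_self_attention(x, w_q, w_k, w_v)
print(out.shape)  # torch.Size([5, 8])
```

Because phi(K)^T V collapses to a d x d summary, the cost is O(N d^2) rather than O(N^2 d), which is what makes long-range pairwise dependencies affordable on large graphs.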
Pages: 15