Knowledge Graph Reasoning via Dynamic Subgraph Attention with Low Resource Computation

Cited by: 4
Authors
Wang, Yin [1 ,2 ]
Xia, Nan [1 ,2 ]
Yu, Hang [1 ]
Luo, Xiangfeng [1 ]
Affiliations
[1] Shanghai Univ, Sch Comp Engn & Sci, Shanghai, Peoples R China
[2] Shanghai ArtiTech AI Technol Co Ltd, Shanghai, Peoples R China
Funding
Natural Science Foundation of Shanghai; National Natural Science Foundation of China;
Keywords
Knowledge graph reasoning; Dynamic Subgraph Attention; Multi-hop path history; Path-based learning; Embedding-based learning;
DOI
10.1016/j.neucom.2024.127866
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Knowledge graphs (KGs) suffer from inherent incompleteness, which has spurred research into knowledge graph reasoning (KGR), i.e., ways to infer missing facts from existing triples. The most prevalent approaches employ a static mechanism for entity representation across the entire graph, sharing entity embeddings across different query relations. However, these static entity embeddings fail to capture the specific semantics of different query scenarios, resulting in inaccurate entity representations and susceptibility to reasoning errors. Furthermore, this whole-graph learning style requires substantial computational resources, especially since KGs are often large-scale. Consequently, these methods are impractical in low-resource environments. To address these issues, we devise a framework called dynamic subgraph attention (DSA), which learns dynamic entity embeddings for different query relations over subgraphs. In our approach, multi-hop path history information obtained through path-based learning guides a dynamic attention mechanism to generate dynamic entity embeddings for different query relations. To preserve the semantic information of entities, embedding-based and path-based learning are jointly trained for KGR. Additionally, the proposed dynamic aggregation mechanism operates on subgraphs, yielding a remarkable 6-7 times reduction in resource consumption compared to whole-graph GPU computation. Empirical experiments further demonstrate that DSA significantly outperforms current methods on three benchmark datasets.
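The core idea sketched in the abstract — attending over an entity's subgraph neighbors with a query vector conditioned on the query relation and the multi-hop path history — can be illustrated with a minimal, hypothetical example. This is not the authors' implementation: the function names, the additive combination of relation and history vectors, and the dot-product scoring are all illustrative assumptions.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def dynamic_entity_embedding(neighbor_embs, query_rel_emb, path_hist_emb):
    """Hypothetical dynamic-attention aggregation over a subgraph.

    neighbor_embs : (n, d) embeddings of the entity's subgraph neighbors
    query_rel_emb : (d,)   embedding of the query relation
    path_hist_emb : (d,)   summary of the multi-hop path history
    Returns a (d,) entity embedding specific to this query.
    """
    # Condition the attention query on both the relation being answered
    # and the path history (an assumed additive combination).
    query = query_rel_emb + path_hist_emb
    scores = neighbor_embs @ query      # one attention score per neighbor
    weights = softmax(scores)           # attention distribution over neighbors
    return weights @ neighbor_embs      # weighted aggregation of the subgraph

# Toy usage: the same entity gets a different embedding per query relation.
rng = np.random.default_rng(0)
d = 8
neighbors = rng.normal(size=(5, d))
rel_a, rel_b = rng.normal(size=d), rng.normal(size=d)
hist = rng.normal(size=d)
emb_a = dynamic_entity_embedding(neighbors, rel_a, hist)
emb_b = dynamic_entity_embedding(neighbors, rel_b, hist)
```

Because aggregation touches only the extracted subgraph rather than the whole graph, its cost scales with the neighborhood size — the property the abstract credits for the reported resource savings.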
Pages: 12