A Pre-trained Universal Knowledge Graph Reasoning Model Based on Rule Prompts

Cited: 0
|
Authors
Cui, Yuanning [1 ]
Sun, Zequn [1 ]
Hu, Wei [1 ]
Affiliations
[1] State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing
Source
Jisuanji Yanjiu yu Fazhan/Computer Research and Development | 2024 / Vol. 61 / No. 8
Funding
National Natural Science Foundation of China;
Keywords
knowledge graph; pre-training; prompt learning; relational IO graph; rules; universal reasoning;
DOI
10.7544/issn1000-1239.202440133
Abstract
A knowledge graph (KG) is a structured knowledge base that stores a massive amount of real-world knowledge, providing data support for numerous knowledge-driven downstream tasks. KGs often suffer from incompleteness, with many facts missing. The KG reasoning task therefore aims to infer new conclusions from known facts to complete the KG. With the development of knowledge engineering and its commercial applications, numerous general-purpose and domain-specific KGs have been constructed. Existing KG reasoning models mostly focus on completing a single KG and lack general reasoning capabilities. Inspired by the general capabilities that pre-trained large language models have shown in recent years, several pre-trained universal KG reasoning models have been proposed. To address the inability of existing pre-trained models to identify high-quality reasoning patterns, we introduce a rule-based pre-trained universal KG reasoning model called RulePreM, which discovers and filters high-quality reasoning rules to enhance reasoning ability. The model first constructs a relational IO graph from reasoning rules and uses an encoder, RuleGNN, to encode the relations. The encoded relations then serve as prompts to encode entities in the KG. Finally, candidate entities are scored for prediction. In addition, an attention mechanism that incorporates rule confidence is introduced to further reduce the impact of low-quality reasoning patterns. Experimental results demonstrate that the proposed model exhibits universal reasoning abilities on 43 different KGs, with average performance surpassing existing supervised and pre-trained models. © 2024 Science Press. All rights reserved.
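The abstract describes an attention mechanism that combines rule confidence to down-weight low-quality reasoning patterns. The sketch below is a minimal illustrative formulation of that idea, not the paper's actual implementation: the function name `rule_confidence_attention` and the specific log-confidence adjustment to the attention logits are assumptions made for illustration.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def rule_confidence_attention(scores, confidences):
    """Hypothetical rule-confidence attention (illustrative only).

    scores      -- raw attention logits for each rule (e.g. query-rule
                   compatibility scores)
    confidences -- mined rule confidences in (0, 1]

    Adding log-confidence to each logit multiplies the corresponding
    softmax weight by the rule's confidence, so low-confidence rules
    contribute proportionally less to the aggregated message.
    """
    adjusted = [s + math.log(c + 1e-9) for s, c in zip(scores, confidences)]
    return softmax(adjusted)

# With equal raw scores, the weights become proportional to confidence,
# suppressing the low-quality (0.1-confidence) rule.
weights = rule_confidence_attention([1.0, 1.0, 1.0], [0.9, 0.5, 0.1])
```

One design choice worth noting: folding confidence into the logits (rather than multiplying the final weights) keeps the output a proper probability distribution without an extra renormalization step.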
Pages: 2030-2044
Page count: 14