MAKE: Knowledge Graph Embedding via Multi-Attention Neural Network

Cited by: 0
Authors
Liu, Denghui [1 ]
Wang, Yanna [1 ]
Zhou, Zili [1 ]
Dong, Zhaoan [2 ]
Affiliations
[1] Qufu Normal Univ, Sch Cyber Sci & Engn, Qufu, Shandong, Peoples R China
[2] Qufu Normal Univ, Sch Comp Sci, Qufu, Shandong, Peoples R China
Source
2022 5TH INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND NATURAL LANGUAGE PROCESSING, MLNLP 2022 | 2022
Keywords
knowledge representation; knowledge graph embedding; neural networks;
DOI
10.1145/3578741.3578815
Chinese Library Classification
TP18 [Theory of Artificial Intelligence];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Knowledge graph embedding is a popular method for addressing the incompleteness of knowledge graphs. Research on neural-network-based knowledge graph embedding has achieved remarkable results, but most models ignore the correlations among the subject entity, relation, and object entity within a triple. Existing attention-based models do account for these correlations, but perform only moderately. In this paper, we propose a multi-attention neural-network-based embedding model, named MAKE, which uses a novel multi-attention mechanism to generate feature maps of triples by computing the correlations within them. To fully exploit the multi-attention mechanism, MAKE employs a trainable batch normalization method and a novel composite loss function to improve the model's learning ability. Evaluation results on the FB15K-237 and WN18RR standard datasets show that MAKE outperforms previous state-of-the-art knowledge graph embedding models.
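The record does not include code, but the abstract's core idea can be illustrated with a short sketch. Below is a minimal, hypothetical PyTorch sketch of a multi-attention triple scorer: the subject and relation embeddings are treated as a two-token sequence, multi-head self-attention computes correlation-aware features, trainable batch normalization is applied, and every candidate object entity is scored. All names (MultiAttentionScorer, emb_dim, num_heads) and design choices are illustrative assumptions, not the actual MAKE architecture from the paper.

```python
# Hypothetical sketch of a multi-attention triple scorer (NOT the paper's
# exact MAKE model); assumes PyTorch and 1-N scoring over object entities.
import torch
import torch.nn as nn


class MultiAttentionScorer(nn.Module):
    def __init__(self, num_entities, num_relations, emb_dim=200, num_heads=4):
        super().__init__()
        self.ent = nn.Embedding(num_entities, emb_dim)
        self.rel = nn.Embedding(num_relations, emb_dim)
        # Multi-head self-attention over the (subject, relation) pair,
        # capturing correlations within the triple as the abstract describes.
        self.attn = nn.MultiheadAttention(emb_dim, num_heads, batch_first=True)
        self.proj = nn.Linear(2 * emb_dim, emb_dim)
        # BatchNorm1d has trainable affine parameters by default.
        self.bn = nn.BatchNorm1d(emb_dim)

    def forward(self, subj_idx, rel_idx):
        s = self.ent(subj_idx)               # (B, d) subject embeddings
        r = self.rel(rel_idx)                # (B, d) relation embeddings
        seq = torch.stack([s, r], dim=1)     # (B, 2, d): s and r as "tokens"
        attended, _ = self.attn(seq, seq, seq)           # correlation-aware
        feat = self.proj(attended.reshape(attended.size(0), -1))  # (B, d)
        feat = self.bn(feat)
        # Score every candidate object entity by dot product (1-N scoring).
        return feat @ self.ent.weight.t()    # (B, num_entities)


# Usage sketch with FB15K-237 sizes (14,541 entities, 237 relations):
model = MultiAttentionScorer(num_entities=14541, num_relations=237)
scores = model(torch.tensor([0, 1]), torch.tensor([5, 7]))  # (2, 14541)
```

The scores could be fed to a binary cross-entropy loss over all candidate objects, as is common in 1-N scoring models such as ConvE; the composite loss function mentioned in the abstract is not specified there, so no attempt is made to reproduce it.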
Pages: 347-352
Number of pages: 6