JKRL: Joint Knowledge Representation Learning of Text Description and Knowledge Graph

Cited: 3
Authors
Xu, Guoyan [1 ]
Zhang, Qirui [1 ]
Yu, Du [1 ]
Lu, Sijun [1 ]
Lu, Yuwei [1 ]
Affiliations
[1] Hohai Univ, Coll Comp & Informat, Nanjing 211100, Peoples R China
Source
SYMMETRY-BASEL | 2023 / Vol. 15 / Issue 05
Keywords
knowledge graph; representation learning; structure embedding; text description;
DOI
10.3390/sym15051056
CLC Classification
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences];
Subject Classification
07 ; 0710 ; 09 ;
Abstract
Knowledge representation learning aims to project research objects into a low-dimensional vector space and to explore the relationships between the embedded objects in that space. However, most methods consider only the triple structure of the knowledge graph and ignore additional information associated with the triples, especially textual descriptions. In this paper, we propose a knowledge graph representation model with a symmetric architecture, Joint Knowledge Representation Learning of Text Description and Knowledge Graph (JKRL), which models the entity and relation descriptions attached to the triple structure for joint representation learning and balances the contributions of the triple structure and the text descriptions during vector learning. First, we adopt the TransE model to learn structural vector representations of entities and relations, and then use a CNN to encode each entity description into a textual representation of the entity. To semantically encode relation descriptions, we design an Attention-Bi-LSTM text encoder, which introduces an attention mechanism into the Bi-LSTM model to compute the semantic relevance between each word in a sentence and the different relations. In addition, we introduce position features alongside word features to better encode word-order information. Finally, we define a joint evaluation function to learn a joint representation from the structural and textual representations. Experiments show that, compared with the baseline methods, our model achieves the best performance on both the Mean Rank and Hits@10 metrics, and it reaches 93.2% accuracy on the triple classification task on the FB15K dataset.
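The scoring pipeline the abstract describes (TransE energy over structural embeddings, combined with description-based embeddings through a joint evaluation function) can be sketched as follows. This is a minimal illustration: the `alpha`-weighted combination in `joint_score` is an assumption for exposition, not the paper's exact objective, and the text embeddings would in practice come from the CNN / Attention-Bi-LSTM encoders rather than being free vectors.

```python
import numpy as np

def transe_score(h, r, t):
    # TransE energy ||h + r - t|| (L2 norm): lower means the
    # triple (h, r, t) is judged more plausible.
    return np.linalg.norm(h + r - t)

def joint_score(h_s, h_d, r, t_s, t_d, alpha=0.5):
    # Hypothetical joint energy: a weighted sum of the structural
    # term (entity vectors h_s, t_s learned by TransE) and the
    # description term (entity vectors h_d, t_d produced by the
    # text encoders). alpha balances the two sources, mirroring
    # how JKRL balances triple structure against text description.
    return (alpha * transe_score(h_s, r, t_s)
            + (1 - alpha) * transe_score(h_d, r, t_d))
```

A perfect structural fit (h + r == t) drives the structural term to zero, so training pushes both the structural and the description-based embeddings toward satisfying the same translation constraint.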
Pages: 23