Two flexible translation-based models for knowledge graph embedding

Cited by: 5
Authors
Li, Zepeng [1 ]
Huang, Rikui [1 ]
Zhang, Yufeng [1 ]
Zhu, Jianghong [1 ]
Hu, Bin [1 ,2 ,3 ,4 ]
Affiliations
[1] Lanzhou Univ, Sch Informat Sci & Engn, Gansu Prov Key Lab Wearable Comp, Lanzhou, Peoples R China
[2] Beijing Inst Technol, Inst Engn Med, Beijing, Peoples R China
[3] Chinese Acad Sci, CAS Ctr Excellence Brain Sci, Shanghai Inst Biol Sci, Shanghai, Peoples R China
[4] Chinese Acad Sci, Inst Biol Sci, Shanghai Inst Biol Sci, Shanghai, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Knowledge graph embedding; translation model; complex relation; single-directional-flexible model; multidirectional-flexible model;
DOI
10.3233/JIFS-211553
CLC number
TP18 [Theory of Artificial Intelligence];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Knowledge Graph Embedding (KGE), which aims to embed the entities and relations of a knowledge graph into a low-dimensional continuous space, has proven effective for completing a knowledge graph and improving its quality. Translation-based models, represented by TransE, TransH, TransR and TransD, have achieved great success in this regard, yet there is still room for improvement in handling complex relations. In this paper, we find that a lack of flexibility in entity embeddings limits a model's ability to represent complex relations. We therefore propose single-directional-flexible (sdf) models and multi-directional-flexible (mdf) models to increase the flexibility and expressiveness of entity embeddings. Both methods can be applied to the TransD model and its variants without any additional time or space cost. We conduct experiments on benchmarks such as WN18 and FB15k. The results show that the proposed models significantly surpass the classical translation models on both triplet classification and link prediction. In particular, on Hits@1 for link prediction on WN18, applying our method to TransD yields 71.7%, far better than the 24.1% of TransD.
Pages: 3093-3105
Page count: 13
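
For context, the translation-based models named in the abstract all score a triple (h, r, t) by how well the relation embedding acts as a translation between the entity embeddings, i.e. h + r ≈ t, with the score given by a norm of h + r - t. The sdf and mdf flexible variants proposed in the paper are not specified in this record, so the sketch below only illustrates the underlying TransE-style score in NumPy; the function name, dimensionality and toy embeddings are illustrative assumptions, not the authors' implementation.

import numpy as np

def transe_score(h, r, t, norm=1):
    # Translation-based plausibility score: smaller means the triple
    # (h, r, t) is considered more likely, following h + r ≈ t.
    return np.linalg.norm(h + r - t, ord=norm)

# Toy usage with random 50-dimensional embeddings (illustrative only).
rng = np.random.default_rng(0)
h, r, t = (rng.normal(size=50) for _ in range(3))
print(transe_score(h, r, t))

In training, such models typically minimize this score for observed triples while keeping it large for corrupted (negative) triples; the paper's contribution, per the abstract, is to make the entity embeddings more flexible within this framework without extra time or space cost.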