Few-shot link prediction with meta-learning for temporal knowledge graphs

Cited by: 4
Authors
Zhu, Lin [1 ]
Xing, Yizong [2 ]
Bai, Luyi [1 ,3 ]
Chen, Xiwen [4 ]
Affiliations
[1] Northeastern Univ Qinhuangdao, Sch Comp & Commun Engn, Qinhuangdao 066004, Peoples R China
[2] Univ Melbourne, Fac Engn, Melbourne, Vic 3010, Australia
[3] Univ Leicester, Sch Informat, Leicester LE1 7RH, England
[4] Carnegie Mellon Univ, Sch Comp Sci, Pittsburgh, PA 15213 USA
Funding
National Natural Science Foundation of China
Keywords
few-shot learning; link prediction; meta-learning; temporal knowledge graph;
DOI
10.1093/jcde/qwad016
CLC number
TP39 [Computer applications]
Subject classification codes
081203; 0835
Abstract
With the deepening of research on knowledge graph embedding, temporal knowledge graphs (TKGs), which change dynamically over time, have gradually attracted researchers' attention. Although several TKG-embedding models have been proposed, they perform poorly on relations with insufficient samples, since they all require large amounts of training data. Thus, few-shot link prediction, i.e., predicting new relation-specific quadruples after observing only a few samples, remains very challenging. In this paper, a method named meta-reasoning for TKGs (MetaRT) is proposed to address this common but challenging problem. MetaRT extracts the meta-information of a specific relation and updates it rapidly, so that the model can learn the most critical information in a TKG swiftly and independently. Meanwhile, temporal information is handled by a dedicated TKG learner. Finally, extensive experiments show that MetaRT outperforms existing TKG-embedding models on few-shot link prediction.
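The record gives only a high-level description of MetaRT, so the following is a minimal, hypothetical sketch of the MAML-style fast adaptation the abstract alludes to: a relation meta-embedding is updated with one gradient step on a few support quadruples. A TransE-style score is used purely as a placeholder; the scoring function, names, and hyperparameters here are assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8

def score(h, r, t):
    # TransE-style plausibility score (lower = better); an illustrative
    # stand-in, not the scoring function MetaRT actually uses.
    return np.linalg.norm(h + r - t)

def inner_update(r_meta, support, lr=0.1):
    # One fast adaptation step on the few-shot support set:
    # gradient of the average score w.r.t. the relation meta-embedding.
    grad = np.zeros(dim)
    for h, t in support:
        diff = h + r_meta - t
        grad += diff / (np.linalg.norm(diff) + 1e-9)
    return r_meta - lr * grad / len(support)

# Toy entity embeddings; in a real TKG these would be learned,
# time-aware representations produced by the TKG learner.
entities = rng.normal(size=(6, dim))
support = [(entities[0], entities[1]), (entities[2], entities[3])]

r = rng.normal(scale=0.1, size=dim)   # relation meta-embedding
r_adapted = inner_update(r, support)  # rapid, relation-specific update

before = np.mean([score(h, r, t) for h, t in support])
after = np.mean([score(h, r_adapted, t) for h, t in support])
print(after < before)  # adaptation lowers the support-set loss
```

The point of the sketch is only the two-level structure: a shared meta-embedding that can be specialized to a new relation from a handful of quadruples, which is what makes few-shot link prediction feasible.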
Pages: 711-721
Page count: 11