Dynamic Strategies for High Performance Training of Knowledge Graph Embeddings

Cited by: 0
Authors
Panda, Anwesh [1 ]
Vadhiyar, Sathish [1 ]
Affiliations
[1] Indian Institute of Science, Department of Computational and Data Sciences, Bangalore, India
Source
51st International Conference on Parallel Processing (ICPP 2022) | 2022
Keywords
Knowledge graph embeddings; communication minimization; gradient quantization; selection of gradient vectors
DOI
10.1145/3545008.3545075
Chinese Library Classification
TP301 [Theory and Methods]
Discipline Code
081202
Abstract
Knowledge graph embeddings (KGEs) are low-dimensional representations of entities and of the relations between them. They can be used for downstream tasks such as triple classification, link prediction, and knowledge base completion. Training these embeddings on large datasets is very time-consuming. This work proposes strategies to speed up the training of KGEs in a distributed-memory parallel environment. The first strategy chooses between an all-gather and an all-reduce operation based on the sparsity of the gradient matrix. The second strategy selects the gradient vectors that contribute most to reducing the loss. The third strategy employs gradient quantization to reduce the number of bits communicated. The fourth strategy partitions the knowledge graph triples by relation so that inter-node communication of the gradients for the relation embedding matrix is eliminated. The fifth and last strategy selects negative triples that the model finds difficult to classify. Combining all five strategies allows us to train the ComplEx KGE model on the FB250K dataset in 6 hours on 16 nodes, compared to 11.5 hours on the same number of nodes without these optimizations. The reduction in training time is accompanied by a significant improvement in Mean Reciprocal Rank (MRR) and Triple Classification Accuracy (TCA).
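To make the first strategy concrete, below is a minimal sketch, not the authors' implementation, of how a trainer might switch between the two collectives based on gradient sparsity. It uses mpi4py with NumPy; the `sync_gradients` helper, the 0.9 sparsity threshold, and the row-sparse payload format are all illustrative assumptions rather than details taken from the paper.

```python
# Sketch of sparsity-aware gradient synchronization (assumed design):
# embedding gradients are row-sparse, since a mini-batch touches only a
# few entities, so exchanging just the non-zero rows via all-gather can
# beat an all-reduce of the full dense matrix when sparsity is high.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD

def sync_gradients(grad, sparsity_threshold=0.9):
    """Combine per-node gradient matrices across all ranks.

    grad: (num_embeddings, dim) local gradient matrix; rows for
    entities not touched in this mini-batch are all-zero.
    """
    nonzero_rows = np.flatnonzero(np.abs(grad).sum(axis=1))
    sparsity = 1.0 - len(nonzero_rows) / grad.shape[0]

    if sparsity < sparsity_threshold:
        # Dense case: a single all-reduce of the full matrix.
        total = np.empty_like(grad)
        comm.Allreduce(grad, total, op=MPI.SUM)
        return total

    # Sparse case: all-gather only (row-index, row-value) pairs from
    # each rank and scatter-add them into a dense accumulator.
    payload = (nonzero_rows, grad[nonzero_rows])
    gathered = comm.allgather(payload)  # pickled object all-gather
    total = np.zeros_like(grad)
    for rows, values in gathered:
        total[rows] += values
    return total

if __name__ == "__main__":
    # Toy run: each rank produces a gradient with 10 non-zero rows.
    rng = np.random.default_rng(comm.Get_rank())
    g = np.zeros((1000, 64))
    rows = rng.choice(1000, size=10, replace=False)
    g[rows] = rng.standard_normal((10, 64))
    reduced = sync_gradients(g)
    if comm.Get_rank() == 0:
        print("non-zero rows after sync:",
              np.count_nonzero(np.abs(reduced).sum(axis=1)))
```

Run with, e.g., `mpirun -n 4 python sparsity_collective.py`. In practice the crossover threshold would depend on the interconnect and on the relative cost of the pickled all-gather versus the dense all-reduce, so it would need to be calibrated per system; the quantization and gradient-selection strategies described above could then be layered on top of the sparse payload.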
Pages: 10