Dynamic Strategies for High Performance Training of Knowledge Graph Embeddings

Cited by: 0
Authors
Panda, Anwesh [1 ]
Vadhiyar, Sathish [1 ]
Affiliations
[1] Indian Inst Sci, Dept Computat & Data, Bangalore, India
Source
51ST INTERNATIONAL CONFERENCE ON PARALLEL PROCESSING, ICPP 2022 | 2022
Keywords
Knowledge graph embeddings; communication minimization; gradient quantization; selection of gradient vectors
DOI
10.1145/3545008.3545075
CLC number
TP301 [Theory, Methods]
Discipline code
081202
Abstract
Knowledge graph embeddings (KGEs) are low-dimensional representations of entities and of the relations between them. They can be used for downstream tasks such as triple classification, link prediction, and knowledge base completion. Training these embeddings on a large dataset takes a huge amount of time. This work proposes strategies to speed up the training of KGEs in a distributed-memory parallel environment. The first strategy chooses between an all-gather and an all-reduce operation based on the sparsity of the gradient matrix. The second selects the gradient vectors that contribute most to the reduction in the loss. The third employs gradient quantization to reduce the number of bits to be communicated. The fourth splits the knowledge graph triples by relation, eliminating inter-node communication for the gradient matrix of the relation embedding matrix. The fifth and last selects the negative triples that the model finds hardest to classify. Combining all five strategies allows us to train the ComplEx Knowledge Graph Embedding (KGE) model on the FB250K dataset in 6 hours on 16 nodes, compared to 11.5 hours on the same number of nodes without these optimizations. The reduction in training time is accompanied by a significant improvement in Mean Reciprocal Rank (MRR) and Triple Classification Accuracy (TCA).
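Two of the strategies in the abstract, the sparsity-driven choice between all-gather and all-reduce and the gradient quantization, can be sketched in a few lines. This is an illustrative sketch only, not the authors' implementation: the 0.5 sparsity threshold, the 8-bit width, and all function names are assumptions, and the actual collectives would be MPI calls rather than a string choice.

```python
def sparsity(grad_rows):
    """Fraction of all-zero rows in a gradient matrix (list of lists)."""
    zero = sum(1 for row in grad_rows if all(v == 0.0 for v in row))
    return zero / len(grad_rows)

def choose_collective(grad_rows, threshold=0.5):
    """Sparse gradients: all-gather only the nonzero (index, row) pairs.
    Dense gradients: a plain all-reduce of the full matrix is cheaper.
    The threshold is a hypothetical tuning parameter."""
    return "all_gather" if sparsity(grad_rows) > threshold else "all_reduce"

def quantize(values, bits=8):
    """Uniform quantization of a gradient vector to `bits`-bit codes.
    Returns (codes, lo, scale) so the receiver can dequantize."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0] * len(values), lo, 0.0
    scale = (hi - lo) / ((1 << bits) - 1)
    return [round((v - lo) / scale) for v in values], lo, scale

def dequantize(codes, lo, scale):
    """Reconstruct approximate gradient values from quantized codes."""
    return [lo + c * scale for c in codes]
```

Only the integer codes plus two floats per vector cross the network, so 32-bit gradients shrink roughly 4x at 8 bits, at the cost of bounded quantization error.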
Pages: 10