Efficient Data Loader for Fast Sampling-Based GNN Training on Large Graphs

Cited by: 19
Authors
Bai, Youhui [1]
Li, Cheng [1]
Lin, Zhiqi [1]
Wu, Yufei [1]
Miao, Youshan [2]
Liu, Yunxin [2]
Xu, Yinlong [1,3]
Affiliations
[1] Univ Sci & Technol China, Sch Comp Sci & Technol, Hefei 230026, Anhui, Peoples R China
[2] Microsoft Res, Beijing 100080, Peoples R China
[3] Anhui Prov Key Lab High Performance Comp, Hefei 230026, Anhui, Peoples R China
Funding
National Key Research and Development Program of China
Keywords
Training; Graphics processing units; Loading; Computational modeling; Load modeling; Partitioning algorithms; Deep learning; Graph neural network; cache; large graph; graph partition; pipeline; multi-GPU;
DOI
10.1109/TPDS.2021.3065737
Chinese Library Classification
TP301 [Theory and Methods]
Subject Classification Code
081202
Abstract
Emerging graph neural networks (GNNs) have extended the success of deep learning techniques from datasets such as images and text to more complex graph-structured data. By leveraging GPU accelerators, existing frameworks combine mini-batching and sampling for effective and efficient model training on large graphs. However, this setup faces a scalability issue: loading rich vertex features from CPU to GPU over a limited-bandwidth link usually dominates the training cycle. In this article, we propose PaGraph, a novel, efficient data loader that supports general sampling-based GNN training on a single server with multiple GPUs. PaGraph significantly reduces data loading time by exploiting available GPU memory to cache frequently accessed graph data. It embodies a lightweight yet effective caching policy that simultaneously takes into account graph structural information and the data access patterns of sampling-based GNN training. Furthermore, to scale out to multiple GPUs, PaGraph develops a fast, GNN-computation-aware partition algorithm that avoids cross-partition accesses during data-parallel training and achieves better cache efficiency. Finally, it overlaps data loading with GNN computation to further hide loading costs. Evaluations on two representative GNN models, GCN and GraphSAGE, using two sampling methods, neighbor sampling and layer-wise sampling, show that PaGraph can eliminate data loading time from the GNN training pipeline and achieve up to a 4.8x speedup over state-of-the-art baselines. Together with preprocessing optimization, PaGraph further delivers up to a 16.0x end-to-end speedup.
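The GPU-resident feature cache described in the abstract can be illustrated with a small sketch. The snippet below is not PaGraph's implementation; it assumes PyTorch and uses vertex out-degree as a simple stand-in for the structure-aware hotness scoring the paper describes, keeping the presumed hottest vertex features resident on the GPU so that a mini-batch gather only falls back to host memory on cache misses. The class and parameter names (DegreeOrderedFeatureCache, cache_ratio, gather) are illustrative, not taken from the paper's code.

# Minimal sketch of a static GPU feature cache for sampling-based GNN
# training (illustrative only; assumes PyTorch and a CUDA device).
import torch

class DegreeOrderedFeatureCache:
    """Pin the features of the highest-out-degree vertices in GPU memory."""

    def __init__(self, features, out_degrees, cache_ratio=0.2, device="cuda"):
        num_nodes = features.shape[0]
        cache_size = int(num_nodes * cache_ratio)
        # Vertices most likely to be reached by neighbor sampling.
        self.cached_ids = torch.argsort(out_degrees, descending=True)[:cache_size]
        self.gpu_features = features[self.cached_ids].to(device)
        # Map global vertex id -> slot in the GPU cache (-1 means not cached).
        self.slot_of = torch.full((num_nodes,), -1, dtype=torch.long)
        self.slot_of[self.cached_ids] = torch.arange(cache_size)
        self.cpu_features = features  # full feature table stays in host memory
        self.device = device

    def gather(self, node_ids):
        """Assemble the feature rows for one sampled mini-batch."""
        slots = self.slot_of[node_ids]
        hit = slots >= 0
        out = torch.empty(
            (node_ids.shape[0], self.cpu_features.shape[1]), device=self.device
        )
        hit_dev = hit.to(self.device)
        # Cache hits are served from GPU memory; only misses cross the CPU-GPU link.
        out[hit_dev] = self.gpu_features[slots[hit].to(self.device)]
        out[~hit_dev] = self.cpu_features[node_ids[~hit]].to(self.device)
        return out

Ordering by out-degree is only a stand-in for the caching policy sketched in the abstract; PaGraph additionally partitions the graph in a GNN-computation-aware way so that each GPU's cache only has to cover the vertices its own partition trains on, which is what makes a modest per-GPU cache effective.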
Pages: 2541-2556
Number of pages: 16