Distributed large-scale graph processing on FPGAs

Cited by: 2
Authors
Sahebi, Amin [1 ,2 ]
Barbone, Marco [3 ]
Procaccini, Marco [1 ,5 ]
Luk, Wayne [3 ]
Gaydadjiev, Georgi [3 ,4 ]
Giorgi, Roberto [1 ,5 ]
Affiliations
[1] Univ Siena, Dept Informat Engn & Math, Siena, Italy
[2] Univ Florence, Dept Informat Engn, Florence, Italy
[3] Imperial Coll London, Dept Comp, London, England
[4] Delft Univ Technol, Dept Quantum & Comp Engn, Delft, Netherlands
[5] Consorzio Interuniv Nazl Informat, Rome, Italy
Funding
European Union Horizon 2020; UK Engineering and Physical Sciences Research Council (EPSRC);
Keywords
Graph processing; Distributed computing; Grid partitioning; FPGA; Accelerators; MODEL;
DOI
10.1186/s40537-023-00756-x
Chinese Library Classification (CLC)
TP301 [Theory, Methods];
Discipline classification code
081202;
Abstract
Processing large-scale graphs is challenging due to the nature of the computation, which causes irregular memory access patterns. Managing such irregular accesses may cause significant performance degradation on both CPUs and GPUs. Thus, recent research trends propose accelerating graph processing with Field-Programmable Gate Arrays (FPGAs). FPGAs are programmable hardware devices that can be fully customised to perform specific tasks in a highly parallel and efficient manner. However, FPGAs have a limited amount of on-chip memory that cannot fit the entire graph. Because of this limited device memory, data needs to be repeatedly transferred to and from the FPGA on-chip memory, so data transfer time comes to dominate the computation time. A possible way to overcome this resource limitation of FPGA accelerators is to employ a multi-FPGA distributed architecture together with an efficient partitioning scheme. Such a scheme aims to increase data locality and minimise communication between partitions. This work proposes an FPGA processing engine that overlaps, hides and customises all data transfers so that the FPGA accelerator is fully utilised. The engine is integrated into a framework for FPGA clusters and uses an offline partitioning method to facilitate the distribution of large-scale graphs. The framework uses Hadoop at a higher level to map a graph onto the underlying hardware platform. This higher layer of computation is responsible for gathering the blocks of data that have been pre-processed and stored on the host file system, and for distributing them to a lower layer of computation made of FPGAs. We show how graph partitioning combined with an FPGA architecture leads to high performance, even when the graph has millions of vertices and billions of edges. For the PageRank algorithm, widely used for ranking the importance of nodes in a graph, our implementation is the fastest compared to state-of-the-art CPU and GPU solutions, achieving a speedup of 13x compared to 8x and 3x, respectively. Moreover, for large-scale graphs the GPU solution fails due to memory limitations, while the CPU solution achieves a speedup of 12x compared to the 26x achieved by our FPGA solution. Other state-of-the-art FPGA solutions are 28 times slower than our proposed solution. When the size of a graph limits the performance of a single FPGA device, our performance model shows that using multiple FPGAs in a distributed system can further improve performance by about 12x. This highlights the efficiency of our implementation for large datasets that do not fit in the on-chip memory of a single hardware device.
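To make the two key ideas in the abstract concrete, the sketch below is a host-side illustration only, not the authors' framework or FPGA kernel: the Edge struct, the block count B, and all function names are assumptions chosen for the example. It groups an edge list into a B x B grid of blocks (grid partitioning), so that each block touches only a bounded range of source and destination vertices, and then runs PageRank iterations by streaming the blocks one at a time, the per-block access pattern that an accelerator can overlap with data transfers.

```cpp
// Minimal sketch of grid partitioning plus block-streamed PageRank.
// Illustrative only; not the paper's engine.
#include <cstdint>
#include <iostream>
#include <vector>

using u32 = std::uint32_t;

struct Edge { u32 src, dst; };

// Group edges into a B x B grid of blocks keyed by (source chunk, destination chunk).
std::vector<std::vector<Edge>> gridPartition(const std::vector<Edge>& edges,
                                             u32 numVertices, u32 B) {
    const u32 chunk = (numVertices + B - 1) / B;   // vertices per chunk
    std::vector<std::vector<Edge>> blocks(B * B);
    for (const Edge& e : edges)
        blocks[(e.src / chunk) * B + (e.dst / chunk)].push_back(e);
    return blocks;
}

// One synchronous PageRank iteration, processed one block at a time.
std::vector<double> pageRankStep(const std::vector<std::vector<Edge>>& blocks,
                                 const std::vector<double>& rank,
                                 const std::vector<u32>& outDegree,
                                 double damping = 0.85) {
    const double n = static_cast<double>(rank.size());
    std::vector<double> next(rank.size(), (1.0 - damping) / n);
    for (const auto& block : blocks)        // each block is one unit of transfer
        for (const Edge& e : block)         // ... and one unit of (accelerated) work
            if (outDegree[e.src] > 0)
                next[e.dst] += damping * rank[e.src] / outDegree[e.src];
    return next;
}

int main() {
    // Tiny 4-vertex example graph.
    const std::vector<Edge> edges = {{0,1},{0,2},{1,2},{2,0},{2,3},{3,2}};
    const u32 numVertices = 4, B = 2;

    std::vector<u32> outDegree(numVertices, 0);
    for (const Edge& e : edges) ++outDegree[e.src];

    const auto blocks = gridPartition(edges, numVertices, B);

    std::vector<double> rank(numVertices, 1.0 / numVertices);
    for (int iter = 0; iter < 20; ++iter)
        rank = pageRankStep(blocks, rank, outDegree);

    for (u32 v = 0; v < numVertices; ++v)
        std::cout << "vertex " << v << ": " << rank[v] << '\n';
    return 0;
}
```

Because every block confines its reads of rank[] and writes of next[] to one vertex chunk each, the per-block working set stays bounded regardless of total graph size, which is what makes streaming blocks through a device with small on-chip memory feasible.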
Pages: 28