DRGN: a dynamically reconfigurable accelerator for graph neural networks

Cited by: 1
Authors
Yang C. [1 ]
Huo K.-B. [1 ]
Geng L.-F. [1 ]
Mei K.-Z. [1 ]
Affiliation
[1] School of Microelectronics, Xi’an Jiaotong University, No. 28 Xianning Road, Beilin District, Xi’an
Funding
National Natural Science Foundation of China
Keywords
Data storage; Dynamic reconfigurable computing; Graph neural network; Prefetcher; Vertex reordering;
DOI
10.1007/s12652-022-04402-x
Abstract
Graph neural networks (GNNs) have achieved great success in processing non-Euclidean, graph-structured data. However, the irregular memory accesses of the aggregation phase and the power-law degree distribution of real-world graphs challenge the existing memory hierarchies and caching policies of CPUs and GPUs. Meanwhile, with the emergence of an increasing number of GNN algorithms, higher demands are being placed on the flexibility of the hardware architecture. In this work, we design a dynamically reconfigurable GNN accelerator (named DRGN) that supports multiple GNN algorithms. Specifically, we first propose a vertex reordering algorithm and an adjacency matrix compression algorithm to improve graph data locality. Furthermore, to improve bandwidth utilization and the reuse rate of node features, we propose a dedicated prefetcher that significantly improves the hit rate. Finally, we propose a scheduling mechanism that assigns tasks to PE units to address workload imbalance. The effectiveness of the proposed DRGN accelerator was evaluated using three GNN algorithms: PageRank, GCN, and GraphSage. Compared with execution on a CPU, DRGN achieves speedups of 231× for PageRank, 150× for GCN, and 39× for GraphSage. Compared with state-of-the-art GNN accelerators, DRGN achieves higher energy efficiency despite being implemented in a relatively lower-end process node. © 2022, The Author(s), under exclusive licence to Springer-Verlag GmbH Germany, part of Springer Nature.
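The abstract mentions vertex reordering, adjacency matrix compression, and neighbour aggregation only at a high level. The sketch below is a rough, generic illustration of those ideas (degree-descending reordering, CSR storage, one mean-aggregation pass), not the algorithms actually proposed in the DRGN paper; all function names and the degree heuristic are assumptions made for illustration.

```python
# Illustrative sketch only: degree-descending vertex reordering plus a
# CSR-compressed adjacency used for one mean-aggregation step. This is a
# generic locality heuristic, NOT DRGN's reordering/compression algorithms.
import numpy as np

def reorder_by_degree(edges, num_nodes):
    """Return an old->new id map placing high-degree vertices first."""
    deg = np.zeros(num_nodes, dtype=np.int64)
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    order = np.argsort(-deg)                 # high-degree vertices first
    new_id = np.empty(num_nodes, dtype=np.int64)
    new_id[order] = np.arange(num_nodes)
    return new_id

def build_csr(edges, num_nodes):
    """Compress a directed edge list into CSR row pointers and column ids."""
    indptr = np.zeros(num_nodes + 1, dtype=np.int64)
    for u, _ in edges:
        indptr[u + 1] += 1
    indptr = np.cumsum(indptr)
    indices = np.empty(len(edges), dtype=np.int64)
    fill = indptr[:-1].copy()
    for u, v in edges:
        indices[fill[u]] = v
        fill[u] += 1
    return indptr, indices

def mean_aggregate(indptr, indices, features):
    """One aggregation pass: each vertex averages its neighbours' features."""
    out = np.zeros_like(features)
    for u in range(len(indptr) - 1):
        nbrs = indices[indptr[u]:indptr[u + 1]]
        if len(nbrs):
            out[u] = features[nbrs].mean(axis=0)
    return out

# Tiny usage example on a 4-vertex graph.
edges = [(0, 1), (1, 2), (2, 0), (3, 0)]
new_id = reorder_by_degree(edges, 4)
edges = [(new_id[u], new_id[v]) for u, v in edges]
indptr, indices = build_csr(edges, 4)
feats_old = np.arange(8, dtype=np.float32).reshape(4, 2)
feats = np.empty_like(feats_old)
feats[new_id] = feats_old                  # row new_id[u] holds vertex u's features
print(mean_aggregate(indptr, indices, feats))
```

In this toy version, placing high-degree (hub) vertices in a contiguous id range is one simple way to make their frequently reused features cache- and prefetch-friendly; the paper's dedicated prefetcher and PE scheduling are hardware mechanisms with no direct software counterpart here.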
Pages: 8985–9000
Page count: 15