Spatial Data Dependence Graph Based Pre-RTL Simulator for Convolutional Neural Network Dataflows

Cited by: 4
Authors
Wang, Jooho [1 ]
Park, Sungkyung [2 ]
Park, Chester Sungchung [1 ]
Affiliations
[1] Konkuk Univ, Dept Elect & Elect Engn, Seoul 05029, South Korea
[2] Pusan Natl Univ, Dept Elect Engn, Pusan 46241, South Korea
Keywords
Hardware acceleration; Memory management; Convolutional neural networks; Bandwidth; Spatial databases; Registers; Power demand; Convolutional neural networks (CNNs); data dependence graph; design space exploration (DSE); hardware accelerators; latency-insensitive controller; pre-RTL simulator; spatial data dependence graph (SDDG); ARCHITECTURE; PERFORMANCE; INFERENCE; COST; DRAM;
DOI
10.1109/ACCESS.2022.3146413
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
In this paper, a new pre-RTL simulator is proposed to predict the power, performance, and area of convolutional neural network (CNN) dataflows prior to register-transfer-level (RTL) design. In the simulator, a novel approach is adopted to implement a spatial data dependence graph (SDDG), which enables us to model a specific dataflow alongside inter-instruction dependencies by tracking the status of each processing element (PE). In addition, the proposed pre-RTL simulator makes it possible to evaluate the impact of memory constraints such as latency and bandwidth. The latency-insensitive and bandwidth-insensitive PE controllers assumed in the proposed pre-RTL simulator guarantee both functional correctness and maximum performance, regardless of memory constraints. In particular, it is shown that the optimal distribution method of local memory bandwidth can reduce the accelerator execution time by up to 37.6% compared with the equal distribution method. For weight stationary (WS) and row stationary (RS) dataflows, the accelerator performance closely depends on memory constraints. The simulation results also show that the relative performances of dataflows depend on the layer shape of the convolutional layer. For example, for an identical hardware area in a standard convolutional layer of AlexNet, WS dataflows do not provide any performance gain over RS dataflows when the memory latency is sufficiently high. In addition, WS dataflows cannot fully reuse the input activation, thereby increasing local memory accesses, since the number of weights loaded at a specific time is limited. Moreover, in a depth-wise convolutional layer of MobileNet, WS dataflows tend to outperform RS dataflows even in the presence of large memory latency. The source code is available on the GitHub repository: https://github.com/SDL-KU/SDDGSim.
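The abstract describes tracking the status of each processing element (PE) through a spatial data dependence graph, so that an operation issues only after the operations it depends on (e.g., a partial sum from a neighboring PE) complete, with memory latency folded into each operation's cost. A minimal sketch of that idea is below; the class name, scheduling policy, and cost model are illustrative assumptions, not the actual SDDGSim implementation.

```python
# Hypothetical sketch of the SDDG idea: operations mapped to PEs form a
# dependence graph, and an operation starts only once all of its
# prerequisites have finished. Memory latency is charged per local access.
# All names and the cost model here are assumptions for illustration.
from collections import defaultdict

class SddgSketch:
    def __init__(self, mem_latency=1):
        self.mem_latency = mem_latency  # cycles per local-memory access
        self.deps = defaultdict(list)   # op -> prerequisite ops
        self.cost = {}                  # op -> total cycles (compute + memory)
        self.finish = {}                # op -> cycle at which it completes

    def add_op(self, op, cycles=1, deps=(), mem_accesses=0):
        self.cost[op] = cycles + mem_accesses * self.mem_latency
        self.deps[op] = list(deps)

    def run(self):
        # Evaluate in topological order: an op is ready once every
        # prerequisite has a recorded finish time.
        pending = set(self.cost)
        while pending:
            ready = [o for o in pending
                     if all(d in self.finish for d in self.deps[o])]
            for o in ready:
                start = max((self.finish[d] for d in self.deps[o]), default=0)
                self.finish[o] = start + self.cost[o]
            pending -= set(ready)
        return max(self.finish.values())

# Toy row-stationary-style chain: a partial sum flows through three PEs,
# so higher memory latency lengthens every link of the chain.
sim = SddgSketch(mem_latency=4)
sim.add_op("pe0", cycles=1, mem_accesses=2)                 # weight + input load
sim.add_op("pe1", cycles=1, deps=["pe0"], mem_accesses=1)
sim.add_op("pe2", cycles=1, deps=["pe1"], mem_accesses=1)
print(sim.run())  # total execution time in cycles: 19
```

Even this toy version exposes the paper's central lever: changing `mem_latency` or the per-PE `mem_accesses` (i.e., how local-memory bandwidth is distributed) changes the critical-path length, which is what the full simulator evaluates for WS and RS dataflows.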
Pages: 11382-11403
Page count: 22
Related Papers
65 records in total
  • [1] [Anonymous], 2011, CVPR 2011 WORKSH
  • [2] Enabling a Reliable STT-MRAM Main Memory Simulation
    Asifuzzaman, Kazi
    Sanchez Verdejo, Rommel
    Radojkovic, Petar
    [J]. MEMSYS 2017: PROCEEDINGS OF THE INTERNATIONAL SYMPOSIUM ON MEMORY SYSTEMS, 2017, : 283 - 292
  • [3] Polymorphic Accelerators for Deep Neural Networks
    Azizimazreah, Arash
    Chen, Lizhong
    [J]. IEEE TRANSACTIONS ON COMPUTERS, 2022, 71 (03) : 534 - 546
  • [4] DORY: Automatic End-to-End Deployment of Real-World DNNs on Low-Cost IoT MCUs
    Burrello, Alessio
    Garofalo, Angelo
    Bruschi, Nazareno
    Tagliavini, Giuseppe
    Rossi, Davide
    Conti, Francesco
    [J]. IEEE TRANSACTIONS ON COMPUTERS, 2021, 70 (08) : 1253 - 1268
  • [5] Theory of latency-insensitive design
    Carloni, LP
    McMillan, KL
    Sangiovanni-Vincentelli, AL
    [J]. IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS, 2001, 20 (09) : 1059 - 1076
  • [6] Optimizing Temporal Convolutional Network Inference on FPGA-Based Accelerators
    Carreras, Marco
    Deriu, Gianfranco
    Raffo, Luigi
    Benini, Luca
    Meloni, Paolo
    [J]. IEEE JOURNAL ON EMERGING AND SELECTED TOPICS IN CIRCUITS AND SYSTEMS, 2020, 10 (03) : 348 - 361
  • [7] Origami: A 803-GOp/s/W Convolutional Network Accelerator
    Cavigelli, Lukas
    Benini, Luca
    [J]. IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2017, 27 (11) : 2461 - 2475
  • [8] Accelerating Real-Time Embedded Scene Labeling with Convolutional Networks
    Cavigelli, Lukas
    Magno, Michele
    Benini, Luca
    [J]. 2015 52ND ACM/EDAC/IEEE DESIGN AUTOMATION CONFERENCE (DAC), 2015,
  • [9] Chen Guoguo, 2014, ICASSP, P4087, DOI 10.1109/ICASSP.2014.6854370
  • [10] Eyeriss: An Energy-Efficient Reconfigurable Accelerator for Deep Convolutional Neural Networks
    Chen, Yu-Hsin
    Krishna, Tushar
    Emer, Joel S.
    Sze, Vivienne
    [J]. IEEE JOURNAL OF SOLID-STATE CIRCUITS, 2017, 52 (01) : 127 - 138