Are SNNs Really More Energy-Efficient Than ANNs? An In-Depth Hardware-Aware Study

Cited by: 18
Authors
Dampfhoffer, Manon [1 ]
Mesquida, Thomas [2 ]
Valentian, Alexandre [2 ]
Anghel, Lorena [1 ]
Affiliations
[1] Univ Grenoble Alpes, CEA, CNRS, Grenoble INP, INAC Spintec, F-38000 Grenoble, France
[2] Univ Grenoble Alpes, CEA, List, Grenoble, France
Source
IEEE TRANSACTIONS ON EMERGING TOPICS IN COMPUTATIONAL INTELLIGENCE | 2023, Vol. 7, No. 3
Keywords
Neurons; Synapses; Energy consumption; Computer architecture; Hardware; Computational modeling; Membrane potentials; Spiking neural networks; neuromorphic hardware; deep neural network accelerators; energy efficiency; DEEP NEURAL-NETWORKS;
DOI
10.1109/TETCI.2022.3214509
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Spiking Neural Networks (SNNs) hold the promise of lower energy consumption in embedded hardware due to their spike-based computations, compared to traditional Artificial Neural Networks (ANNs). However, the relative energy efficiency of this emerging technology compared to traditional digital hardware has not been fully explored: many studies do not consider memory accesses, which account for a large fraction of the energy consumption, use naive ANN hardware implementations, or lack generality. In this paper, we compare the relative energy efficiency of classical digital implementations of ANNs with novel event-based SNN implementations based on variants of the Integrate and Fire (IF) model. We provide a theoretical upper bound on the relative energy efficiency of ANNs, by computing the maximum possible benefit from ANN data reuse and sparsity. We also use the Eyeriss ANN accelerator as a case study. We show that the simpler IF model is more energy-efficient than the Leaky IF and temporal continuous synapse models. Moreover, SNNs with the IF model can compete with efficient ANN implementations when there is a very high spike sparsity, i.e., between 0.15 and 1.38 spikes per synapse per inference, depending on the ANN implementation. Our analysis shows that hybrid ANN-SNN architectures, leveraging an SNN event-based approach in layers with high sparsity and ANN parallel processing for the others, are a promising new path for further energy savings.
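The break-even arithmetic behind the abstract (ANN cost per synapse vs. SNN cost per spike) can be sketched with a toy first-order model. All energy constants below are invented placeholders for illustration, not figures from the paper; the break-even sparsity shifts with the hardware assumed:

```python
# First-order energy model contrasting a classical ANN layer with an
# event-driven SNN (IF neuron) layer, in the spirit of the paper's analysis.
# All pJ figures are illustrative placeholders, NOT values from the paper.

E_MAC = 4.6   # pJ per ANN multiply-accumulate (assumed)
E_AC  = 0.9   # pJ per SNN accumulate; the IF model needs no multiply (assumed)
E_MEM = 2.0   # pJ per weight fetch (assumed; ANN data reuse can amortize this)

def ann_energy(n_synapses, fetches_per_synapse=1.0):
    """ANN: one MAC per synapse per inference, plus weight traffic."""
    return n_synapses * (E_MAC + fetches_per_synapse * E_MEM)

def snn_energy(n_synapses, spikes_per_synapse):
    """Event-driven IF SNN: work scales with spike count, not synapse count."""
    return n_synapses * spikes_per_synapse * (E_AC + E_MEM)

def break_even_sparsity(fetches_per_synapse=1.0):
    """Spikes/synapse/inference below which the SNN consumes less energy."""
    return (E_MAC + fetches_per_synapse * E_MEM) / (E_AC + E_MEM)

if __name__ == "__main__":
    n = 1_000_000  # synapses in a hypothetical layer
    print(f"break-even: {break_even_sparsity():.2f} spikes/synapse/inference")
    print(f"SNN at 0.5 spikes/synapse: {snn_energy(n, 0.5) / 1e6:.2f} uJ")
    print(f"ANN:                       {ann_energy(n) / 1e6:.2f} uJ")
```

With these placeholder constants the SNN wins below roughly 2.3 spikes per synapse per inference; the paper's much lower thresholds (0.15 to 1.38) reflect ANN implementations that exploit data reuse and activation sparsity, which this sketch models only through the `fetches_per_synapse` knob.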
Pages: 731-741
Page count: 11
References
33 records
  • [1] Han B., 2020, Computer Vision - ECCV 2020, 16th European Conference, Proceedings, LNCS 12355, P388, DOI 10.1007/978-3-030-58607-2_23
  • [2] Boahen K.A., "Point-to-point connectivity between neuromorphic chips using address events," IEEE Transactions on Circuits and Systems II: Express Briefs, 2000, 47(5): 416-434
  • [3] Chen Y.-H., Yang T.-J., Emer J.S., Sze V., "Eyeriss v2: A Flexible Accelerator for Emerging Deep Neural Networks on Mobile Devices," IEEE Journal on Emerging and Selected Topics in Circuits and Systems, 2019, 9(2): 292-308
  • [4] Chen Y.-H., Krishna T., Emer J.S., Sze V., "Eyeriss: An Energy-Efficient Reconfigurable Accelerator for Deep Convolutional Neural Networks," IEEE Journal of Solid-State Circuits, 2017, 52(1): 127-138
  • [5] Chen Y.-H., Emer J., Sze V., "Eyeriss: A Spatial Architecture for Energy-Efficient Dataflow for Convolutional Neural Networks," 2016 ACM/IEEE 43rd Annual International Symposium on Computer Architecture (ISCA), 2016: 367-379
  • [6] Corradi F., 2021, DroneSE and RAPIDO '21: Proceedings of the 2021 Drone Systems Engineering and Rapid Simulation and Performance Evaluation: Methods and Tools, P9, DOI 10.1145/3444950.3444951
  • [7] Davidson S., Furber S.B., "Comparison of Artificial and Spiking Neural Networks on Digital Hardware," Frontiers in Neuroscience, 2021, 15
  • [8] Davies M., Srinivasa N., Lin T.-H., Chinya G., Cao Y., Choday S.H., Dimou G., Joshi P., Imam N., Jain S., Liao Y., Lin C.-K., Lines A., Liu R., Mathaikutty D., McCoy S., Paul A., Tse J., Venkataramanan G., Weng Y.-H., Wild A., Yang Y., Wang H., "Loihi: A Neuromorphic Manycore Processor with On-Chip Learning," IEEE Micro, 2018, 38(1): 82-99
  • [9] Esteva A., Kuprel B., Novoa R.A., Ko J., Swetter S.M., Blau H.M., Thrun S., "Dermatologist-level classification of skin cancer with deep neural networks," Nature, 2017, 542(7639): 115+
  • [10] Gerstner W., 2014, Neuronal Dynamics: From Single Neurons to Networks and Models of Cognition, P1, DOI 10.1017/CBO9781107447615