Are SNNs Truly Energy-efficient? - A Hardware Perspective

Cited by: 1
Authors:
Bhattacharjee, Abhiroop [1 ]
Yin, Ruokai [1 ]
Moitra, Abhishek [1 ]
Panda, Priyadarshini [1 ]
Affiliations:
[1] Yale Univ, Dept Elect Engn, New Haven, CT 06520 USA
Source:
2024 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP 2024) | 2024
Funding:
U.S. National Science Foundation
Keywords:
Spiking Neural Networks; Systolic-arrays; In-memory Computing; Crossbars; Energy-efficiency;
DOI:
10.1109/ICASSP48485.2024.10448269
CLC classification:
O42 [Acoustics]
Subject classification codes:
070206; 082403
Abstract:
Spiking Neural Networks (SNNs) have gained attention for their energy-efficient machine learning capabilities, utilizing bio-inspired activation functions and sparse binary spike-data representations. While recent SNN algorithmic advances achieve high accuracy on large-scale computer vision tasks, their energy-efficiency claims rely on certain impractical estimation metrics. This work studies two hardware benchmarking platforms for large-scale SNN inference, namely SATA and SpikeSim. SATA is a sparsity-aware systolic-array accelerator, while SpikeSim evaluates SNNs implemented on In-Memory Computing (IMC) based analog crossbars. Using these tools, we find that the actual energy-efficiency improvements of recent SNN algorithmic works differ significantly from their estimated values due to various hardware bottlenecks. We identify and address key roadblocks to efficient SNN deployment on hardware, including repeated computations and data movements over timesteps, neuronal module overhead, and the vulnerability of SNNs to crossbar non-idealities.
Pages: 13311-13315 (5 pages)