Toward High-Accuracy and Low-Latency Spiking Neural Networks With Two-Stage Optimization

Cited by: 9
Authors
Wang, Ziming [1 ]
Zhang, Yuhao [2 ]
Lian, Shuang [1 ]
Cui, Xiaoxin [3 ]
Yan, Rui [4 ]
Tang, Huajin [4 ,5 ]
Affiliations
[1] Zhejiang Univ City Coll, Coll Comp Sci & Technol, Hangzhou 310027, Peoples R China
[2] Zhejiang Lab, Res Ctr Intelligent Comp Hardware, Hangzhou 311100, Peoples R China
[3] Peking Univ, Sch Integrated Circuits, Beijing 100871, Peoples R China
[4] Zhejiang Univ Technol, Coll Comp Sci & Technol, Hangzhou 310014, Peoples R China
[5] Zhejiang Univ, State Key Lab Brain Machine Intelligence, Hangzhou 310027, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Artificial neural network (ANN)-spiking neural network (SNN) conversion; deep SNNs; neuromorphic computing; residual membrane potential; SNN; spike-based object detection; two-stage optimization;
DOI
10.1109/TNNLS.2023.3337176
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Discipline Code
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Spiking neural networks (SNNs) operating with asynchronous discrete events show higher energy efficiency through sparse computation. A popular approach for implementing deep SNNs is artificial neural network (ANN)-SNN conversion, which combines the efficient training of ANNs with the efficient inference of SNNs. However, the accuracy loss is usually nonnegligible, especially under few time steps, which greatly restricts the application of SNNs to latency-sensitive edge devices. In this article, we first identify that such performance degradation stems from the misrepresentation of negative or overflowing residual membrane potential in SNNs. Inspired by this, we decompose the conversion error into three parts: quantization error, clipping error, and residual membrane potential representation error. With these insights, we propose a two-stage conversion algorithm that minimizes each of these errors, respectively. In addition, we show that each stage achieves significant performance gains in a complementary manner. Evaluated on challenging datasets including CIFAR-10, CIFAR-100, and ImageNet, the proposed method demonstrates state-of-the-art performance in terms of accuracy, latency, and energy efficiency. Furthermore, our method is evaluated on a more challenging object detection task, revealing notable gains in regression performance under ultralow latency compared with existing spike-based detection algorithms.
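The three error sources named in the abstract can be illustrated with a minimal soft-reset integrate-and-fire (IF) neuron simulation. This is a generic sketch, not the authors' implementation: the function name, the soft-reset convention, and the chosen inputs are illustrative assumptions.

```python
def if_neuron_rate(z, theta=1.0, T=8):
    """Simulate a soft-reset integrate-and-fire neuron driven by a
    constant input z for T time steps.

    Returns the rate-coded output (spike count * theta / T) and the
    residual membrane potential left after the last step.
    """
    v = 0.0
    spikes = 0
    for _ in range(T):
        v += z                 # integrate the input
        if v >= theta:         # fire when the threshold is crossed
            spikes += 1
            v -= theta         # soft reset keeps the residual potential
    return spikes * theta / T, v

# Quantization error: the output is restricted to multiples of theta/T,
# so an ANN activation of 0.33 is approximated by 0.25 at T = 8.
rate, v = if_neuron_rate(0.33, T=8)        # rate = 0.25, residual v ~ 0.64

# Clipping error: inputs above theta saturate at one spike per step.
rate_hi, v_hi = if_neuron_rate(1.5, T=8)   # rate = 1.0, overflow v = 4.0

# Negative inputs leave a negative residual potential that a
# nonnegative rate code cannot represent -- the representation error.
rate_neg, v_neg = if_neuron_rate(-0.1, T=8)  # rate = 0.0, v ~ -0.8
```

Note how the unconverted portion of the input is not lost but stored in the residual membrane potential (0.33 × 8 − 2 spikes = 0.64), which is why misrepresenting that residual under few time steps degrades accuracy.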
Pages: 1-15 (15 pages)
Related Papers
50 records in total
  • [1] Highway Connection for Low-Latency and High-Accuracy Spiking Neural Networks
    Zhang, Anguo
    Wu, Junyi
    Li, Xiumin
    Li, Hung Chun
    Gao, Yueming
    Pun, Sio Hang
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS II-EXPRESS BRIEFS, 2023, 70 (12) : 4579 - 4583
  • [2] High-accuracy and Low-latency Hybrid Stochastic Computing for Artificial Neural Network
    Chen, Kun-Chih
    Chen, Cheng-Ting
    18TH INTERNATIONAL SOC DESIGN CONFERENCE 2021 (ISOCC 2021), 2021, : 254 - 255
  • [3] Optimized Potential Initialization for Low-Latency Spiking Neural Networks
    Bu, Tong
    Ding, Jianhao
    Yu, Zhaofei
    Huang, Tiejun
    THIRTY-SIXTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FOURTH CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE / THE TWELFTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2022, : 11 - 20
  • [4] Can Deep Neural Networks be Converted to Ultra Low-Latency Spiking Neural Networks?
    Datta, Gourav
    Beerel, Peter A.
    PROCEEDINGS OF THE 2022 DESIGN, AUTOMATION & TEST IN EUROPE CONFERENCE & EXHIBITION (DATE 2022), 2022, : 718 - 723
  • [5] Constrain Bias Addition to Train Low-Latency Spiking Neural Networks
    Lin, Ranxi
    Dai, Benzhe
    Zhao, Yingkai
    Chen, Gang
    Lu, Huaxiang
    BRAIN SCIENCES, 2023, 13 (02)
  • [6] Training High-Performance Low-Latency Spiking Neural Networks by Differentiation on Spike Representation
    Meng, Qingyan
    Xiao, Mingqing
    Yan, Shen
    Wang, Yisen
    Lin, Zhouchen
    Luo, Zhi-Quan
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2022, : 12434 - 12443
  • [7] High-Accuracy and Low-Latency Tracker for UAVs Monitoring Tibetan Antelopes
    Luo, Wei
    Li, Xiaofang
    Zhang, Guoqing
    Shao, Quanqin
    Zhao, Yongxiang
    Li, Denghua
    Zhao, Yunfeng
    Li, Xuqing
    Zhao, Zihui
    Liu, Yuyan
    Li, Xiaoliang
    REMOTE SENSING, 2023, 15 (02)
  • [8] RMPE: Reducing Residual Membrane Potential Error for Enabling High-Accuracy and Ultra-low-latency Spiking Neural Networks
    Chen, Yunhua
    Xiong, Zhimin
    Feng, Ren
    Chen, Pinghua
    Xiao, Jinsheng
    NEURAL INFORMATION PROCESSING, ICONIP 2023, PT III, 2024, 14449 : 81 - 93
  • [9] Spatio-Temporal Pruning and Quantization for Low-latency Spiking Neural Networks
    Chowdhury, Sayeed Shafayet
    Garg, Isha
    Roy, Kaushik
    2021 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2021,
  • [10] Training Low-Latency Spiking Neural Network with Orthogonal Spiking Neurons
    Yao, Yunpeng
    Wu, Man
    Zhang, Renyuan
    2023 21ST IEEE INTERREGIONAL NEWCAS CONFERENCE, NEWCAS, 2023,