RMPE: Reducing Residual Membrane Potential Error for Enabling High-Accuracy and Ultra-low-latency Spiking Neural Networks

Cited by: 0
Authors
Chen, Yunhua [1 ]
Xiong, Zhimin [1 ]
Feng, Ren [1 ]
Chen, Pinghua [1 ]
Xiao, Jinsheng [2 ]
Affiliations
[1] Guangdong Univ Technol, Sch Comp, Guangzhou, Peoples R China
[2] Wuhan Univ, Sch Elect Informat, Wuhan, Peoples R China
Source
NEURAL INFORMATION PROCESSING, ICONIP 2023, PT III | 2024 / Vol. 14449
Keywords
Spiking Neural Networks; Rate Coding; ANN-SNN Conversion
DOI
10.1007/978-981-99-8067-3_7
CLC Number
TP18 [Theory of Artificial Intelligence]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Spiking neural networks (SNNs) have attracted great attention due to their distinctive properties of low power consumption and high computing efficiency on neuromorphic hardware. An effective way to obtain deep SNNs with competitive accuracy on large-scale datasets is ANN-SNN conversion. However, because of conversion error, it requires a long time window to obtain an optimal mapping between the firing rates of the SNN and the activations of the ANN. Compared with the source ANN, the converted SNN usually suffers a large loss of accuracy at ultra-low latency. In this paper, we first analyze the residual membrane potential error caused by the asynchronous transmission of spikes at ultra-low latency, and derive an explicit expression relating the residual membrane potential error (RMPE) to the SNN parameters. We then propose a layer-by-layer calibration algorithm for these SNN parameters to eliminate the RMPE. Finally, a two-stage ANN-SNN conversion scheme is proposed to eliminate the quantization error, the truncation error, and the RMPE separately. We evaluate our method on the CIFAR datasets and ImageNet, and the experimental results show that the proposed ANN-SNN conversion method significantly reduces accuracy loss at ultra-low latency. When T <= 64, our method requires about half the latency of other methods with similar accuracy on ImageNet. The code is available at https://github.com/JominWink/SNN_Conversion_Phase.
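The residual membrane potential the abstract refers to can be seen in a plain integrate-and-fire simulation. The sketch below is illustrative only and is not the paper's calibration algorithm: a soft-reset IF neuron receives a constant input equal to an ANN ReLU activation, and the charge left on the membrane after T steps is the part of the activation that the firing rate fails to deliver, so the rate error shrinks as T grows.

```python
def if_neuron_rate(x, threshold=1.0, T=4):
    """Simulate a soft-reset integrate-and-fire neuron for T steps.

    Returns the spike-rate estimate of the input and the residual
    membrane potential left on the neuron after the last step.
    """
    v = 0.0
    spikes = 0
    for _ in range(T):
        v += x               # constant input current each step
        if v >= threshold:
            spikes += 1
            v -= threshold   # soft reset keeps the surplus charge
    rate = spikes * threshold / T
    return rate, v           # v is the residual membrane potential

# An ANN ReLU activation of 0.3 should ideally map to a rate of 0.3.
for T in (4, 64):
    rate, residual = if_neuron_rate(0.3, T=T)
    print(f"T={T:3d}  rate={rate:.4f}  residual potential={residual:.4f}")
```

At T=4 the neuron fires once, giving a rate of 0.25 and leaving 0.2 of charge stranded on the membrane; at T=64 the rate is within about 0.003 of the target. This is the latency/accuracy trade-off the paper's calibration scheme attacks at ultra-low T.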
Pages: 81-93
Page count: 13