OneSpike: Ultra-low latency spiking neural networks

Cited by: 0
Authors
Tang, Kaiwen [1 ]
Yan, Zhanglu [1 ]
Wong, Weng-Fai [1 ]
Affiliations
[1] Natl Univ Singapore, Sch Comp, Singapore, Singapore
Source
2024 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, IJCNN 2024 | 2024
Keywords
Spiking Neural Networks; Ultra-low Latency; Energy Efficiency
DOI
10.1109/IJCNN60899.2024.10651169
CLC number
TP18 [Theory of Artificial Intelligence]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
With the development of deep learning models, there has been growing research interest in spiking neural networks (SNNs) due to the energy efficiency resulting from their multiplier-less nature. Existing methodologies for SNN development include the conversion of artificial neural networks (ANNs) into equivalent SNNs or the emulation of ANNs, with two crucial challenges remaining. The first challenge is preserving the accuracy of the original ANN models during the conversion to SNNs. The second is running complex SNNs with lower latencies. To solve the problem of high latency while maintaining high accuracy, we propose a parallel spike generation (PSG) method that generates all the spikes in a single timestep, while achieving better model performance than the standard Integrate-and-Fire model. Based on PSG, we propose OneSpike, a highly effective framework that converts any rate-encoded convolutional SNN into one that uses only one timestep without accuracy loss. Our OneSpike model achieves a state-of-the-art (for SNNs) accuracy of 81.92% on the ImageNet dataset using just a single timestep. To the best of our knowledge, this study is the first to explore converting multi-timestep SNNs into equivalent single-timestep ones while maintaining accuracy. These results highlight the potential of our approach in addressing the key challenges in SNN research, paving the way for more efficient and accurate SNNs in practical applications.
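The core idea of the abstract, replacing a sequential multi-timestep Integrate-and-Fire simulation with a single-step computation that emits the same spike count, can be sketched as follows. This is a minimal illustration under stated assumptions (constant input current, soft reset, a hypothetical closed-form aggregation rule), not the authors' actual PSG implementation:

```python
import numpy as np

def if_neuron_spike_count(x, theta=1.0, T=8):
    """Standard Integrate-and-Fire neuron simulated sequentially over T
    timesteps: the membrane integrates a constant input x each step, emits
    a spike when it reaches the threshold theta, and soft-resets by
    subtracting theta."""
    v = np.zeros_like(x, dtype=float)
    spikes = np.zeros_like(x, dtype=float)
    for _ in range(T):
        v += x                             # integrate input for this step
        fired = v >= theta
        spikes += fired                    # record at most one spike/step
        v = np.where(fired, v - theta, v)  # soft reset on firing
    return spikes

def parallel_spike_generation(x, theta=1.0, T=8):
    """Illustrative parallel analogue: produce the total spike count of T
    timesteps in one shot by integrating T steps' worth of input at once,
    clipped to the valid range [0, T]."""
    return np.clip(np.floor(T * x / theta), 0, T)

x = np.array([0.0, 0.3, 0.5, 1.2])
print(if_neuron_spike_count(x))        # sequential, T iterations
print(parallel_spike_generation(x))    # one step, same spike counts
```

For non-negative constant inputs, both paths yield identical spike counts, which is the equivalence the single-timestep conversion relies on; the paper's method additionally handles real convolutional layers and arbitrary rate-encoded inputs.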
Pages: 8