ALBSNN: ultra-low latency adaptive local binary spiking neural network with accuracy loss estimator

Cited by: 2
Authors
Pei, Yijian [1 ]
Xu, Changqing [1 ,2 ]
Wu, Zili [3 ]
Liu, Yi [2 ]
Yang, Yintang [2 ]
Affiliations
[1] Xidian Univ, Guangzhou Inst Technol, Xian, Peoples R China
[2] Xidian Univ, Sch Microelect, Xian, Peoples R China
[3] Xidian Univ, Sch Comp Sci & Technol, Xian, Peoples R China
Funding
National Natural Science Foundation of China; China Postdoctoral Science Foundation;
Keywords
spiking neural networks; binary neural networks; neuromorphic computing; sparsity; visual recognition;
DOI
10.3389/fnins.2023.1225871
Chinese Library Classification
Q189 [Neuroscience];
Discipline code
071006;
Abstract
Spiking neural networks (SNNs) are brain-inspired models with strong spatio-temporal information processing capacity and high computational energy efficiency. However, as SNNs grow deeper, the memory footprint of their weights has drawn increasing attention. In this study, we propose an ultra-low latency adaptive local binary spiking neural network (ALBSNN) with an accuracy loss estimator, which dynamically selects the network layers to be binarized by evaluating the error introduced by the binarized weights during training, thereby balancing the degree of quantization against classification accuracy. At the same time, to accelerate training, a global average pooling (GAP) layer combining convolution and pooling replaces the fully connected layers. Finally, to further reduce the error caused by binary weights, we propose binary weight optimization (BWO), which updates the overall weights by directly adjusting the binary weights; this further reduces the loss of a network that has reached its training bottleneck. Together, these methods balance the network's quantization and recognition ability, allowing it to match the recognition capability of a full-precision network while reducing storage space by more than 20%, so SNNs can obtain good recognition accuracy with a small number of time steps. In the extreme case of a single time step, we still achieve 93.39%, 92.12%, and 69.55% testing accuracy on the three conventional static datasets Fashion-MNIST, CIFAR-10, and CIFAR-100, respectively. We also evaluate our method on the neuromorphic N-MNIST, CIFAR10-DVS, and IBM DVS128 Gesture datasets and achieve state-of-the-art accuracy among SNNs with binary weights. Our network offers clear advantages in terms of storage resources and training time.
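The abstract's core idea, selectively binarizing only those layers whose binarization error stays small, can be illustrated with a minimal sketch. The code below uses generic channel-wise scaled sign binarization (XNOR-Net style) and a simple relative-output-error proxy; the scaling rule, error metric, and threshold are all illustrative assumptions, not the paper's exact ALBSNN accuracy loss estimator.

```python
import numpy as np

def binarize(W):
    # Channel-wise scaled sign binarization (XNOR-Net style);
    # a generic sketch, not the paper's exact formulation.
    alpha = np.abs(W).mean(axis=1, keepdims=True)  # per-output-channel scale
    return alpha * np.sign(W)

def layer_quant_error(W, X):
    # Accuracy-loss proxy: relative output error introduced by
    # binarizing this layer's weights on a batch of inputs X.
    full = X @ W.T
    binr = X @ binarize(W).T
    return np.linalg.norm(full - binr) / (np.linalg.norm(full) + 1e-12)

def select_layers_to_binarize(layers, X, threshold=0.7):
    # Hypothetical selection rule: binarize a layer only when its
    # estimated error is below the threshold; keep the rest at
    # full precision to preserve classification accuracy.
    return [i for i, W in enumerate(layers)
            if layer_quant_error(W, X) <= threshold]

rng = np.random.default_rng(0)
layers = [rng.normal(size=(16, 32)) for _ in range(3)]
X = rng.normal(size=(8, 32))
print(select_layers_to_binarize(layers, X))
```

In a real ALBSNN pipeline the error would be evaluated on spiking activations during training and the selection revisited as weights evolve; the sketch only shows the estimate-then-select structure.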
Pages: 12
Related references
49 records
  • [1] A Low Power, Fully Event-Based Gesture Recognition System
    Amir, Arnon
    Taba, Brian
    Berg, David
    Melano, Timothy
    McKinstry, Jeffrey
    Di Nolfo, Carmelo
    Nayak, Tapan
    Andreopoulos, Alexander
    Garreau, Guillaume
    Mendoza, Marcela
    Kusnitz, Jeff
    Debole, Michael
    Esser, Steve
    Delbruck, Tobi
    Flickner, Myron
    Modha, Dharmendra
    [J]. 30TH IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2017), 2017, : 7388 - 7397
  • [2] Bengio Y, 2013, Arxiv, DOI arXiv:1308.3432
  • [3] Spiking Deep Convolutional Neural Networks for Energy-Efficient Object Recognition
    Cao, Yongqiang
    Chen, Yang
    Khosla, Deepak
    [J]. INTERNATIONAL JOURNAL OF COMPUTER VISION, 2015, 113 (01) : 54 - 66
  • [4] Accuracy vs. Efficiency: Achieving both Through Hardware-Aware Quantization and Reconfigurable Architecture with Mixed Precision
    Chang, Libo
    Zhang, Shengbing
    Du, Huimin
    Wang, Shiyu
    Qiu, Meikang
    Wang, Jihe
    [J]. 19TH IEEE INTERNATIONAL SYMPOSIUM ON PARALLEL AND DISTRIBUTED PROCESSING WITH APPLICATIONS (ISPA/BDCLOUD/SOCIALCOM/SUSTAINCOM 2021), 2021, : 151 - 158
  • [5] Che KW, 2022, ADV NEUR IN
  • [6] Cheng X, 2020, PROCEEDINGS OF THE TWENTY-NINTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, P1519
  • [7] Courbariaux M., 2015, ICLR
  • [8] Comprehensive SNN Compression Using ADMM Optimization and Activity Regularization
    Deng, Lei
    Wu, Yujie
    Hu, Yifan
    Liang, Ling
    Li, Guoqi
    Hu, Xing
    Ding, Yufei
    Li, Peng
    Xie, Yuan
    [J]. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2023, 34 (06) : 2791 - 2805
  • [9] Efficient training of spiking neural networks with temporally-truncated local backpropagation through time
    Guo, Wenzhe
    Fouda, Mohammed E.
    Eltawil, Ahmed M.
    Salama, Khaled Nabil
    [J]. FRONTIERS IN NEUROSCIENCE, 2023, 17
  • [10] Efficient Hardware Implementation for Online Local Learning in Spiking Neural Networks
    Guo, Wenzhe
    Fouda, Mohammed E.
    Eltawil, Ahmed M.
    Salama, Khaled Nabil
    [J]. 2022 IEEE INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE CIRCUITS AND SYSTEMS (AICAS 2022): INTELLIGENT TECHNOLOGY IN THE POST-PANDEMIC ERA, 2022, : 387 - 390