An adaptive threshold mechanism for accurate and efficient deep spiking convolutional neural networks

Cited: 28
Authors
Chen, Yunhua [1 ]
Mai, Yingchao [1 ]
Feng, Ren [1 ]
Xiao, Jinsheng [2 ]
Affiliations
[1] Guangdong Univ Technol, Sch Comp, Guangzhou, Peoples R China
[2] Wuhan Univ, Sch Elect Informat, Wuhan, Peoples R China
Keywords
Spiking convolutional neural networks; Conversion from CNN to SNN; Approximation error; Ratio-of-threshold-to-weights
DOI
10.1016/j.neucom.2021.10.080
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Spiking neural networks (SNNs) can potentially offer an efficient way of performing inference because their neurons are sparsely activated and their computations are event-driven. SNNs with higher accuracy can be obtained by converting deep convolutional neural networks (CNNs) into spiking CNNs. However, there is always a performance loss between a CNN and its spiking equivalent, because approximation error arises in the conversion from the continuous-valued CNN to the sparsely firing, event-driven SNN. In this paper, the differences between analog neurons and spiking neurons in their neuron models and activities are analyzed, the impact of the balance between weight and threshold on the approximation error is clarified, and an adaptive threshold mechanism for an improved weight-threshold balance in SNNs is proposed. In this method, the threshold is dynamically adjusted to the input data, which makes it possible to obtain a threshold as small as possible while still distinguishing inputs, so as to generate sufficient firing to drive higher layers and consequently achieve better classification. The SNN with the adaptive threshold mechanism outperforms most recently proposed SNNs on CIFAR10 in terms of accuracy, accuracy loss, and network latency, and achieves state-of-the-art results on CIFAR100. (c) 2021 Elsevier B.V. All rights reserved.
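The abstract's core idea, an integrate-and-fire neuron whose firing threshold adapts to the input so that it stays as small as possible while remaining able to distinguish inputs, can be sketched as follows. The adaptation rule and the `alpha` parameter here are illustrative assumptions for a minimal sketch, not the authors' exact mechanism:

```python
def if_neuron_adaptive(inputs, base_theta=1.0, alpha=0.1):
    """Integrate-and-fire neuron with an input-adaptive firing threshold.

    `alpha` and the adaptation rule below are hypothetical placeholders;
    the paper's actual mechanism may differ.
    """
    v = 0.0                  # membrane potential
    theta = base_theta       # current threshold
    spikes = []
    for x in inputs:
        v += x               # integrate the weighted input
        # Hypothetical adaptation: nudge the threshold toward the input
        # magnitude, but never below the base threshold, keeping it as
        # small as possible while still discriminating inputs.
        theta = max(base_theta, theta + alpha * (abs(x) - theta))
        if v >= theta:
            spikes.append(1)
            v -= theta       # reset by subtraction, common in CNN-to-SNN conversion
        else:
            spikes.append(0)
    return spikes
```

With a constant sub-threshold input the neuron still fires once enough charge accumulates (e.g. `if_neuron_adaptive([0.6, 0.6, 0.6])` yields `[0, 1, 0]`), while a large input raises the threshold before the spike is emitted.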
Pages: 189-197
Page count: 9