Enabling Spike-Based Backpropagation for Training Deep Neural Network Architectures

Cited by: 298
Authors
Lee, Chankyu [1 ]
Sarwar, Syed Shakib [1 ]
Panda, Priyadarshini [1 ]
Srinivasan, Gopalakrishnan [1 ]
Roy, Kaushik [1 ]
Affiliations
[1] Purdue University, School of Electrical and Computer Engineering, Nanoelectronics Research Laboratory, West Lafayette, IN 47907, USA
Funding
U.S. National Science Foundation
Keywords
spiking neural network; convolutional neural network; spike-based learning rule; gradient descent backpropagation; leaky integrate and fire neuron
DOI
10.3389/fnins.2020.00119
CLC Classification Number
Q189 [Neuroscience]
Subject Classification Code
071006
Abstract
Spiking Neural Networks (SNNs) have recently emerged as a prominent neural computing paradigm. However, typical shallow SNN architectures have limited capacity for expressing complex representations, while training deep SNNs using input spikes has not been successful so far. Diverse methods have been proposed to get around this issue, such as converting off-the-shelf trained deep Artificial Neural Networks (ANNs) to SNNs. However, the ANN-SNN conversion scheme fails to capture the temporal dynamics of a spiking system. It also remains difficult to directly train deep SNNs using input spike events, because the spike generation function is discontinuous and non-differentiable. To overcome this problem, we propose an approximate derivative method that accounts for the leaky behavior of Leaky Integrate-and-Fire (LIF) neurons. This method enables training deep convolutional SNNs directly (with input spike events) using spike-based backpropagation. Our experiments show the effectiveness of the proposed spike-based learning on deep networks (VGG and residual architectures), which achieve the best classification accuracies on the MNIST, SVHN, and CIFAR-10 datasets among SNNs trained with spike-based learning. Moreover, we analyze the sparse, event-based computations to demonstrate the efficacy of the proposed SNN training method for inference in the spiking domain.
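
As a rough illustration of the idea described in the abstract, the sketch below implements an LIF neuron layer whose spike function uses an approximate (surrogate) derivative in the backward pass, so that backpropagation can flow through the otherwise non-differentiable spike generation. It assumes PyTorch; the piecewise-linear surrogate shape, the leak factor, and the threshold are illustrative choices, not the exact formulation or hyperparameters used in the paper.

# Minimal sketch (not the paper's exact method): LIF dynamics with a
# surrogate spike derivative, written against PyTorch's autograd API.
import torch

class ApproxSpike(torch.autograd.Function):
    """Heaviside spike in the forward pass; approximate derivative in the backward pass."""

    @staticmethod
    def forward(ctx, membrane_potential, threshold):
        ctx.save_for_backward(membrane_potential)
        ctx.threshold = threshold
        return (membrane_potential >= threshold).float()

    @staticmethod
    def backward(ctx, grad_output):
        (membrane_potential,) = ctx.saved_tensors
        # Piecewise-linear surrogate (an assumption, not the paper's derivative):
        # nonzero gradient only near the firing threshold.
        surrogate = torch.clamp(1.0 - torch.abs(membrane_potential - ctx.threshold), min=0.0)
        return grad_output * surrogate, None

def lif_forward(inputs, leak=0.95, threshold=1.0):
    """Unroll an LIF neuron over time and emit output spikes.

    inputs: tensor of shape (time_steps, batch, features) holding input spikes/currents.
    leak and threshold are illustrative values.
    """
    membrane = torch.zeros_like(inputs[0])
    spikes = []
    for current in inputs:
        membrane = leak * membrane + current          # leaky integration
        out = ApproxSpike.apply(membrane, threshold)  # spike generation
        membrane = membrane * (1.0 - out)             # reset membrane after a spike
        spikes.append(out)
    return torch.stack(spikes)

if __name__ == "__main__":
    # Poisson-like binary input spikes over 10 time steps, batch of 4, 8 features.
    x = (torch.rand(10, 4, 8) < 0.3).float().requires_grad_()
    out = lif_forward(x)
    out.sum().backward()  # gradients flow through the surrogate derivative
    print(out.shape, x.grad.abs().sum())

In a full training setup, such an LIF layer would be stacked with convolutional weight layers and optimized end-to-end with a standard gradient-descent optimizer, which is the spike-based backpropagation setting the abstract refers to.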
Pages: 22