Spatio-Temporal Backpropagation for Training High-Performance Spiking Neural Networks

Cited by: 776
Authors
Wu, Yujie [1]
Deng, Lei [1,2]
Li, Guoqi [1]
Zhu, Jun [3]
Shi, Luping [1]
Affiliations
[1] Tsinghua Univ, Dept Precis Instrument, Ctr Brain Inspired Comp Res, Beijing Innovat Ctr Future Chip, Beijing, Peoples R China
[2] Univ Calif Santa Barbara, Dept Elect & Comp Engn, Santa Barbara, CA 93106 USA
[3] Tsinghua Univ, Tsinghua Natl Lab Informat Sci & Technol, State Key Lab Intelligence Technol & Syst, Beijing, Peoples R China
Funding
National Natural Science Foundation of China; Beijing Natural Science Foundation
Keywords
spiking neural network (SNN); spatio-temporal recognition; leaky integrate-and-fire neuron; MNIST-DVS; MNIST; backpropagation; convolutional neural networks (CNN)
DOI
10.3389/fnins.2018.00331
Chinese Library Classification
Q189 [Neuroscience]
Subject classification code
071006
Abstract
Spiking neural networks (SNNs) are promising for capturing brain-like behaviors because spikes can encode spatio-temporal information. Recent schemes, e.g., pre-training from artificial neural networks (ANNs) or direct training based on backpropagation (BP), have made high-performance supervised training of SNNs possible. However, these methods focus mainly on information in the spatial domain and pay less attention to the dynamics in the temporal domain. This can lead to a performance bottleneck and typically requires many additional training techniques. A further problem is that spike activity is inherently non-differentiable, which makes supervised training of SNNs more difficult. In this paper, we propose a spatio-temporal backpropagation (STBP) algorithm for training high-performance SNNs. To address the non-differentiability of spike activity, we introduce an approximated derivative of the spike function that is suitable for gradient descent training. The STBP algorithm combines the layer-by-layer spatial domain (SD) and the timing-dependent temporal domain (TD), and it does not require any additional complicated techniques. We evaluate the method with both fully connected and convolutional architectures on the static MNIST dataset, a custom object detection dataset, and the dynamic N-MNIST dataset. The results show that our approach achieves the best accuracy compared with existing state-of-the-art algorithms on spiking networks. This work provides a new perspective on high-performance SNNs for future brain-like computing paradigms with rich spatio-temporal dynamics.
Pages: 12
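
The abstract describes two technical ingredients: an approximated (surrogate) derivative that stands in for the non-differentiable spike activity, and error propagation through both the layer-by-layer spatial domain and the step-by-step temporal domain of iteratively updated leaky integrate-and-fire (LIF) neurons. The sketch below is a minimal PyTorch illustration of those two ideas, not the authors' released implementation; the rectangular surrogate window, the constants thresh, decay, and a, the layer sizes, and the helper names SurrogateSpike and LIFLayer are illustrative assumptions rather than the paper's exact settings.

import torch
import torch.nn as nn

thresh = 0.5   # firing threshold (illustrative value)
decay = 0.2    # membrane leak factor (illustrative value)
a = 1.0        # width of the rectangular surrogate window (illustrative value)

class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike in the forward pass; a rectangular-window
    approximation of its derivative in the backward pass."""
    @staticmethod
    def forward(ctx, u):
        ctx.save_for_backward(u)
        return (u >= thresh).float()

    @staticmethod
    def backward(ctx, grad_output):
        (u,) = ctx.saved_tensors
        # d(spike)/du is approximated by 1/a inside a window of width a
        # centred on the threshold, and by 0 elsewhere.
        surrogate = (torch.abs(u - thresh) < a / 2).float() / a
        return grad_output * surrogate

spike_fn = SurrogateSpike.apply

class LIFLayer(nn.Module):
    """A fully connected layer followed by iterative LIF dynamics,
    unrolled over the time dimension."""
    def __init__(self, in_features, out_features):
        super().__init__()
        self.fc = nn.Linear(in_features, out_features)

    def forward(self, x_seq):
        # x_seq: (T, batch, in_features) sequence of input spikes/values.
        T, batch, _ = x_seq.shape
        u = torch.zeros(batch, self.fc.out_features, device=x_seq.device)
        s = torch.zeros_like(u)
        out = []
        for t in range(T):
            # Leak the membrane potential, zero it where a spike just
            # occurred (reset), and integrate the new weighted input.
            u = decay * u * (1.0 - s) + self.fc(x_seq[t])
            s = spike_fn(u)
            out.append(s)
        return torch.stack(out)  # (T, batch, out_features)

# Toy usage: Bernoulli spike input, loss on the time-averaged firing rate.
net = nn.Sequential(LIFLayer(784, 400), LIFLayer(400, 10))
x = (torch.rand(10, 32, 784) > 0.8).float()          # (T=10, batch=32, 784)
rate = net(x).mean(dim=0)                             # firing rate per class
target = nn.functional.one_hot(torch.randint(0, 10, (32,)), 10).float()
loss = nn.functional.mse_loss(rate, target)
loss.backward()

Because the LIF update is unrolled as an explicit loop over time steps, the single loss.backward() call lets autograd accumulate gradients along both the layer-to-layer (spatial) and step-to-step (temporal) paths, which is the gist of training in both domains at once.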