Supervised Learning With First-to-Spike Decoding in Multilayer Spiking Neural Networks

Cited by: 3
Authors
Gardner, Brian [1 ]
Gruening, Andre [2 ]
Affiliations
[1] Univ Surrey, Dept Comp Sci, Guildford, Surrey, England
[2] Univ Appl Sci, Fac Elect Engn & Comp Sci, Stralsund, Germany
Keywords
spiking neural networks; multilayer SNN; supervised learning; backpropagation; temporal coding; classification; MNIST; plasticity; neurons; models; rule
DOI
10.3389/fncom.2021.617862
Chinese Library Classification (CLC)
Q [Biological Sciences]
Discipline Classification Codes
07; 0710; 09
Abstract
Experimental studies support the notion of spike-based neuronal information processing in the brain, with neural circuits exhibiting a wide range of temporally based coding strategies to rapidly and efficiently represent sensory stimuli. Accordingly, it would be desirable to apply spike-based computation to tackle real-world challenges, and in particular to transfer such theory to neuromorphic systems for low-power embedded applications. Motivated by this, we propose a new supervised learning method that can train multilayer spiking neural networks to solve classification problems based on a rapid, first-to-spike decoding strategy. The proposed learning rule supports multiple spikes fired by stochastic hidden neurons, yet remains stable by relying on first-spike responses generated by a deterministic output layer. In addition, we explore several distinct, spike-based encoding strategies in order to form compact representations of presented input data. We demonstrate the classification performance of the learning rule as applied to several benchmark datasets, including MNIST. The learning rule is capable of generalizing from the data, and is successful even when used with constrained network architectures containing few input and hidden layer neurons. Furthermore, we highlight a novel encoding strategy, termed "scanline encoding," that can transform image data into compact spatiotemporal patterns for subsequent network processing. Designing constrained but optimized network structures and performing input dimensionality reduction has strong implications for neuromorphic applications.
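The two ideas named in the abstract — encoding an image as a handful of spike times along scanlines, and decoding the class from whichever output neuron fires first — can be sketched as follows. This is an illustrative reading of those ideas only, not the authors' exact parameterization: the scanline geometry, the intensity `threshold`, and the helper names `scanline_encode` and `first_to_spike_decode` are all assumptions introduced here for clarity.

```python
import numpy as np

def scanline_encode(image, scanlines, threshold=0.5):
    """One input neuron per scanline. Walking along the line's pixel
    coordinates, the neuron's spike time is the step index of the first
    pixel whose intensity exceeds `threshold`; np.inf means no spike.
    An illustrative sketch of scanline encoding, not the paper's scheme."""
    spike_times = []
    for line in scanlines:           # line: sequence of (row, col) coordinates
        t = np.inf
        for step, (r, c) in enumerate(line):
            if image[r, c] > threshold:
                t = float(step)
                break
        spike_times.append(t)
    return np.array(spike_times)

def first_to_spike_decode(output_spike_times):
    """First-to-spike readout: the predicted class is the index of the
    output neuron with the earliest first-spike time."""
    return int(np.argmin(output_spike_times))

# Toy usage: a 4x4 image with one bright pixel, scanned row by row.
img = np.zeros((4, 4))
img[1, 2] = 1.0
rows = [[(r, c) for c in range(4)] for r in range(4)]
times = scanline_encode(img, rows)   # neuron 1 spikes at step 2; others stay silent
pred = first_to_spike_decode([np.inf, 3.0, 5.0])  # neuron 1 fires first
```

Note that `np.inf` gives silent neurons a natural "never fired" value that `argmin` handles without special-casing, which is one reason first-spike readouts are attractive for low-latency classification.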
Pages: 24