Desire backpropagation: A lightweight training algorithm for multi-layer spiking neural networks based on spike-timing-dependent plasticity

Cited by: 2
Authors
Gerlinghoff, Daniel [1]
Luo, Tao [1]
Goh, Rick Siow Mong [1]
Wong, Weng-Fai [2]
Affiliations
[1] A*STAR, Institute of High Performance Computing (IHPC), 1 Fusionopolis Way, #16-16 Connexis, Singapore 138632, Singapore
[2] National University of Singapore, Department of Computer Science, COM1, 13 Computing Drive, Singapore 117417, Singapore
Keywords
Spiking neural network; Spike-timing-dependent plasticity; Supervised learning; Architecture; ReSuMe
DOI
10.1016/j.neucom.2023.126773
CLC Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Spiking neural networks (SNNs) are a viable alternative to conventional artificial neural networks when resource efficiency and computational complexity are of importance. A major advantage of SNNs is their binary information transfer through spike trains, which eliminates multiplication operations. Training SNNs has, however, been a challenge, since neuron models are non-differentiable and traditional gradient-based backpropagation algorithms cannot be applied directly. Furthermore, spike-timing-dependent plasticity (STDP), although a spike-based learning rule, updates weights locally and does not optimize for the output error of the network. We present desire backpropagation, a method to derive the desired spike activity of all neurons, including the hidden ones, from the output error. By incorporating this desire value into the local STDP weight update, we can efficiently capture the neuron dynamics while minimizing the global error and attaining a high classification accuracy, which makes desire backpropagation a spike-based supervised learning rule. We trained three-layer networks to classify MNIST and Fashion-MNIST images and reached accuracies of 98.41% and 87.56%, respectively. In addition, by eliminating a multiplication during the backward pass, we reduce computational complexity and balance arithmetic resources between the forward and backward passes, making desire backpropagation a candidate for training on low-resource devices.
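The abstract describes the mechanism only at a high level. The Python sketch below illustrates the general idea of deriving per-neuron "desire" values from the output error and using them to gate local, STDP-flavoured weight updates. It is a minimal toy reconstruction under stated assumptions: rate-summarised presynaptic traces, ternary desires, and simple integrate-and-fire neurons. The layer sizes, constants, and function names (lif_forward, train_step) are invented for illustration and do not reproduce the paper's exact rule.

import numpy as np

# Minimal sketch of a desire-style update for a two-layer spiking network.
# All names, constants, and the exact update rule are illustrative
# assumptions, not the formulation from the paper.

rng = np.random.default_rng(0)

N_IN, N_HID, N_OUT, T = 64, 32, 4, 20   # toy layer sizes, T simulation steps
THRESH, LR, DES_CUT = 1.0, 1e-3, 0.1    # firing threshold, learning rate, desire cutoff

def lif_forward(spikes_in, W):
    """Integrate-and-fire layer: accumulate input, spike at threshold, reset."""
    v = np.zeros(W.shape[0])
    spikes_out = np.zeros((T, W.shape[0]))
    for t in range(T):
        v += W @ spikes_in[t]
        fired = v >= THRESH
        spikes_out[t] = fired
        v[fired] = 0.0
    return spikes_out

def train_step(W1, W2, spikes_in, target):
    """One desire-modulated local update; returns updated weights."""
    h = lif_forward(spikes_in, W1)   # hidden spike raster, shape (T, N_HID)
    o = lif_forward(h, W2)           # output spike raster, shape (T, N_OUT)

    # Output desire: +1 where the neuron should have spiked more, -1 where less.
    d_out = np.sign(target - o.mean(axis=0))

    # Hidden desire: propagate output desires backwards through the weights
    # and threshold to {-1, 0, +1}. Because d_out is ternary, this product
    # needs only additions/subtractions in a dedicated implementation.
    raw = W2.T @ d_out
    d_hid = np.where(np.abs(raw) > DES_CUT, np.sign(raw), 0.0)

    # Local update: strengthen synapses whose presynaptic activity drives
    # neurons that should spike, weaken those driving neurons that should not.
    # Presynaptic activity is summarised by mean firing rates here.
    W2 = W2 + LR * np.outer(d_out, h.mean(axis=0))
    W1 = W1 + LR * np.outer(d_hid, spikes_in.mean(axis=0))
    return W1, W2

# Usage: one update on a Poisson-coded random input with class 1 as target.
W1 = rng.normal(0.0, 0.5, (N_HID, N_IN))
W2 = rng.normal(0.0, 0.5, (N_OUT, N_HID))
x = (rng.random((T, N_IN)) < 0.2).astype(float)
target = np.eye(N_OUT)[1]
W1, W2 = train_step(W1, W2, x, target)

The ternary desire values are what make the backward pass cheap: the thresholded sign propagation replaces a full gradient computation, consistent with the abstract's claim of eliminating a multiplication during the backward pass.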
Pages: 10