Spike-based computation using classical recurrent neural networks

Cited by: 1
Authors
De Geeter, Florent [1 ]
Ernst, Damien [1 ,2 ]
Drion, Guillaume [1 ]
Affiliations
[1] Univ Liege, Montefiore Inst, Liege, Belgium
[2] Inst Polytech Paris, LTCI, Telecom Paris, Paris, France
Source
NEUROMORPHIC COMPUTING AND ENGINEERING | 2024, Vol. 4, Issue 02
Keywords
spiking neural networks; backpropagation; deep neural networks; recurrent neural networks;
DOI
10.1088/2634-4386/ad473b
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronics and Communication Technology];
Discipline Codes
0808; 0809
Abstract
Spiking neural networks (SNNs) are a type of artificial neural network in which communication between neurons consists only of events, also called spikes. This property allows such networks to perform asynchronous and sparse computations and therefore to drastically reduce energy consumption when run on specialized hardware. However, training these networks is known to be difficult, mainly because of the non-differentiability of the spike activation, which prevents the use of classical backpropagation: state-of-the-art SNNs are usually derived from biologically inspired neuron models, to which machine learning training methods are then applied. Current research on SNNs therefore focuses on designing training algorithms whose goal is to obtain networks that compete with their non-spiking counterparts on specific tasks. In this paper, we attempt the symmetrical approach: we modify the dynamics of a well-known, easily trainable type of recurrent neural network (RNN) to make it event-based. This new RNN cell, called the spiking recurrent cell, therefore communicates using events, i.e. spikes, while being completely differentiable. Vanilla backpropagation can thus be used to train any network made of such cells. We show that this new network can achieve performance comparable to other types of spiking networks on the MNIST benchmark and its variants, Fashion-MNIST and Neuromorphic-MNIST. Moreover, we show that this new cell makes the training of deep spiking networks achievable.
Pages: 14
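The abstract does not give the cell's equations, but the general idea it describes, a classical, easily trainable recurrent cell whose inter-neuron communication is squashed into near-binary events while remaining fully differentiable, can be sketched as follows. This is a minimal, hypothetical PyTorch illustration, not the authors' spiking recurrent cell: the GRU base, the steep sigmoid, and the `sharpness` and `threshold` parameters are assumptions made for illustration only.

```python
import torch
import torch.nn as nn

class DifferentiableSpikingCell(nn.Module):
    """Hypothetical sketch: a GRU-based cell emitting near-binary, differentiable 'spikes'."""
    def __init__(self, input_size, hidden_size, sharpness=25.0, threshold=0.5):
        super().__init__()
        self.cell = nn.GRUCell(input_size, hidden_size)  # classical, easily trainable RNN cell
        self.sharpness = sharpness   # slope of the spike-shaped sigmoid (illustrative value)
        self.threshold = threshold   # hidden-state level around which a 'spike' is emitted (illustrative value)

    def forward(self, x, h):
        h = self.cell(x, h)  # standard recurrent update
        # A steep sigmoid pushes the output towards {0, 1} while staying smooth,
        # so vanilla backpropagation applies without surrogate gradients.
        spikes = torch.sigmoid(self.sharpness * (h - self.threshold))
        return spikes, h

# Example: one time step on a batch of flattened 28x28 images.
cell = DifferentiableSpikingCell(input_size=784, hidden_size=128)
x = torch.rand(32, 784)
h = torch.zeros(32, 128)
spikes, h = cell(x, h)
spikes.sum().backward()  # gradients flow end-to-end through the spike nonlinearity
```

Because every operation in this sketch is smooth, an unrolled network of such cells can be trained with ordinary backpropagation through time, which is the property the abstract highlights; the authors' actual cell dynamics should be taken from the paper itself.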