Recurrent Spiking Neural Network Learning Based on a Competitive Maximization of Neuronal Activity

Cited by: 34
Authors
Demin, Vyacheslav [1 ,2 ]
Nekhaev, Dmitry [1 ]
Affiliations
[1] Natl Res Ctr, Kurchatov Inst, Moscow, Russia
[2] Moscow Inst Phys & Technol, Dolgoprudnyi, Russia
Funding
Russian Science Foundation;
Keywords
spiking neural networks; unsupervised learning; supervised learning; digits recognition; classification; neuron clustering; PROGRAMMED CELL-DEATH; DEPENDENT PLASTICITY; MODEL; STDP;
DOI
10.3389/fninf.2018.00079
Chinese Library Classification
Q [Biological Sciences];
Subject Classification Codes
07 ; 0710 ; 09 ;
Abstract
Spiking neural networks (SNNs) are believed to offer high computational and energy efficiency for real-time solutions on dedicated neurochip hardware. However, learning algorithms for complex SNNs with recurrent connections that match the efficiency of back-propagation techniques and support unsupervised training are still lacking. Here we suppose that each neuron in a biological neural network tends to maximize its activity in competition with other neurons, and we put this principle at the basis of a new SNN learning algorithm. In this way, a spiking network with learned feed-forward, reciprocal, and intralayer inhibitory connections is applied to digit recognition on the MNIST database. We demonstrate that this SNN can be trained without a teacher after a short supervised initialization of weights by the same algorithm. We also show that neurons become grouped into families of hierarchical structures corresponding to different digit classes and their associations. This property is expected to be useful for reducing the number of layers in deep neural networks and for modeling the formation of various functional structures in a biological nervous system. A comparison of the learning properties of the suggested algorithm with those of the Sparse Distributed Representation approach shows similarity in coding but also some advantages of the former. The basic principle of the proposed algorithm is believed to be practically applicable to constructing much more complicated SNNs that solve diverse tasks. We refer to this new approach as "Family-Engaged Execution and Learning of Induced Neuron Groups," or FEELING.
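The abstract's central idea — each neuron maximizing its own activity under competition with its neighbors — can be illustrated with a toy model. The sketch below is a hypothetical minimal stand-in, not the authors' FEELING algorithm: it uses leaky integrate-and-fire neurons with hard winner-take-all lateral inhibition, where the winning neuron's weights move toward the current input, raising its future activity on similar inputs (all function names, parameters, and values are illustrative assumptions).

```python
import numpy as np

def lif_wta_step(x, W, v, v_th=0.5, leak=0.9, lr=0.05):
    """One step of a leaky integrate-and-fire layer with hard
    winner-take-all lateral inhibition. The winner's weights are
    nudged toward the input, so its activity on similar inputs
    grows over time -- a toy form of competitive activity
    maximization."""
    v = leak * v + W @ x              # leaky integration of input current
    spikes = np.zeros_like(v)
    if v.max() >= v_th:               # only the most active neuron may fire
        win = int(np.argmax(v))
        spikes[win] = 1.0
        W[win] += lr * (x - W[win])   # Hebbian-style competitive update
        v[:] = 0.0                    # lateral inhibition resets the layer
    return spikes, W, v

# Two competing neurons, two orthogonal input patterns.
W = np.array([[0.6, 0.6, 0.4, 0.4],
              [0.4, 0.4, 0.6, 0.6]])
v = np.zeros(2)
patterns = [np.array([1.0, 1.0, 0.0, 0.0]),
            np.array([0.0, 0.0, 1.0, 1.0])]
n_spikes = 0
for _ in range(50):
    for x in patterns:
        s, W, v = lif_wta_step(x, W, v)
        n_spikes += int(s.sum())
# After training, each neuron has specialized to one pattern:
# its response to "its" pattern grows while the other neuron
# takes over the second pattern.
```

In this reduced setting the competition alone suffices to cluster inputs, which mirrors (in miniature) the neuron-family formation the abstract describes; the full algorithm additionally learns recurrent and reciprocal connections.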
Pages: 13
References (63 total)
[1]   Excess of neurons in the human newborn mediodorsal thalamus compared with that of the adult [J].
Abitz, Maja ;
Nielsen, Rune Damgaard ;
Jones, Edward G. ;
Laursen, Henning ;
Graem, Niels ;
Pakkenberg, Bente .
CEREBRAL CORTEX, 2007, 17 (11) :2573-2578
[2]  
[Anonymous], 2013, INT C MACHINE LEARNI
[3]  
[Anonymous], CORR
[4]  
[Anonymous], 2012, MACH LEARN
[5]   Spatio-temporal electrical stimuli shape behavior of an embodied cortical network in a goal-directed learning task [J].
Bakkum, Douglas J. ;
Chao, Zenas C. ;
Potter, Steve M. .
JOURNAL OF NEURAL ENGINEERING, 2008, 5 (03) :310-323
[6]  
Barlow H., 1961, SENS COMMUN, V13, P217
[7]   THEORY FOR THE DEVELOPMENT OF NEURON SELECTIVITY - ORIENTATION SPECIFICITY AND BINOCULAR INTERACTION IN VISUAL-CORTEX [J].
BIENENSTOCK, EL ;
COOPER, LN ;
MUNRO, PW .
JOURNAL OF NEUROSCIENCE, 1982, 2 (01) :32-48
[8]  
Bottou L., 1998, On-line learning in neural networks, P9
[9]   A review of the integrate-and-fire neuron model: I. Homogeneous synaptic input [J].
Burkitt, A. N. .
BIOLOGICAL CYBERNETICS, 2006, 95 (01) :1-19
[10]   Neuronal regulation: A mechanism for synaptic pruning during brain maturation [J].
Chechik, G ;
Meilijson, I ;
Ruppin, E .
NEURAL COMPUTATION, 1999, 11 (08) :2061-2080