An efficient automated parameter tuning framework for spiking neural networks

Cited by: 50
Authors
Carlson, Kristofor D. [1]
Nageswaran, Jayram Moorkanikara [2]
Dutt, Nikil [3]
Krichmar, Jeffrey L. [1,3]
Affiliations
[1] Univ Calif Irvine, Dept Cognit Sci, Irvine, CA 92697 USA
[2] Brain Corp, San Diego, CA USA
[3] Univ Calif Irvine, Dept Comp Sci, Irvine, CA 92697 USA
Keywords
spiking neural networks; parameter tuning; evolutionary algorithms; GPU programming; self-organizing receptive fields; STDP; large-scale model; synaptic plasticity; cerebellum; simulation; evolution
DOI
10.3389/fnins.2014.00010
Chinese Library Classification
Q189 [Neuroscience]
Discipline code
071006
Abstract
As the desire for biologically realistic spiking neural networks (SNNs) increases, tuning the enormous number of open parameters in these models becomes increasingly difficult. SNNs have been used to successfully model complex neural circuits that explore neural phenomena such as neural plasticity, vision systems, auditory systems, and neural oscillations, among many other important topics of neural function. Additionally, SNNs are particularly well suited to run on neuromorphic hardware that will support biological brain-scale architectures. Although the inclusion of realistic plasticity equations, neural dynamics, and recurrent topologies has increased the descriptive power of SNNs, it has also made the task of tuning these biologically realistic SNNs difficult. To meet this challenge, we present an automated parameter tuning framework capable of tuning SNNs quickly and efficiently using evolutionary algorithms (EAs) and inexpensive, readily accessible graphics processing units (GPUs). A sample SNN with 4104 neurons was tuned to give V1 simple cell-like tuning curve responses and to produce self-organizing receptive fields (SORFs) when presented with a random sequence of counterphase sinusoidal grating stimuli. A performance analysis comparing the GPU-accelerated implementation to a single-threaded central processing unit (CPU) implementation showed a 65x speedup of the GPU implementation over the CPU implementation, or 0.35 h per generation for the GPU vs. 23.5 h per generation for the CPU. Additionally, the parameter value solutions found in the tuned SNN were studied and found to be stable and repeatable. The automated parameter tuning framework presented here will be of use to both the computational neuroscience and neuromorphic engineering communities, making the process of constructing and tuning large-scale SNNs much quicker and easier.
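The core technique in the abstract, evolutionary-algorithm (EA) parameter tuning against a target tuning-curve response, can be sketched in a few lines. The toy firing-rate model, fitness function, and all parameter names below are illustrative assumptions for a minimal example, not the paper's actual code; in the real framework each individual would be a full SNN parameter set, with the candidate networks evaluated in parallel on the GPU.

```python
# Minimal EA sketch: tune a toy neuron model's parameters so that its
# orientation tuning curve matches a target V1 simple cell-like response.
import math
import random

random.seed(0)

ANGLES = [i * math.pi / 8 for i in range(16)]  # stimulus orientations

def tuning_curve(params, angles):
    """Toy rate model: amplitude, preferred orientation, baseline rate."""
    amp, pref, base = params
    return [base + amp * max(0.0, math.cos(a - pref)) for a in angles]

# Target response generated from known "ground truth" parameters.
TARGET = tuning_curve((10.0, math.pi / 4, 2.0), ANGLES)

def fitness(params):
    """Negative mean squared error against the target tuning curve."""
    resp = tuning_curve(params, ANGLES)
    return -sum((r - t) ** 2 for r, t in zip(resp, TARGET)) / len(ANGLES)

def evolve(pop_size=20, generations=100, sigma=0.5):
    # Random initial population of parameter vectors.
    pop = [[random.uniform(0, 15), random.uniform(0, math.pi),
            random.uniform(0, 5)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # truncation selection (elitist)
        children = [[g + random.gauss(0, sigma)  # Gaussian mutation of a parent
                     for g in random.choice(parents)]
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

Because the top half of the population survives unchanged each generation, the best fitness is monotonically non-decreasing; the expensive step in practice is the fitness evaluation itself, which is where the paper's GPU acceleration pays off.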
Pages: 15