Supervised training of spiking neural networks for robust deployment on mixed-signal neuromorphic processors

Cited by: 19
Authors
Buechel, Julian [1 ,2 ,3 ]
Zendrikov, Dmitrii [2 ,3 ]
Solinas, Sergio [4 ]
Indiveri, Giacomo [1 ,2 ,3 ]
Muir, Dylan R. [1 ]
Affiliations
[1] SynSense, Thurgauerstr 40, CH-8050 Zurich, Switzerland
[2] Univ Zurich, Inst Neuroinformat, Winterthurerstr 190, CH-8057 Zurich, Switzerland
[3] Swiss Fed Inst Technol, Winterthurerstr 190, CH-8057 Zurich, Switzerland
[4] Univ Sassari, Dept Biomed Sci, Piazza Univ 21, I-07100 Sassari, Sardegna, Italy
Funding
EU Horizon 2020;
Keywords
ON-CHIP; EFFICIENT; SYSTEM; TIME; PLASTICITY; MISMATCH; BRAIN; POWER;
DOI
10.1038/s41598-021-02779-x
CLC classification
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biosciences]; N [General Natural Sciences];
Subject classification
07; 0710; 09;
Abstract
Mixed-signal analog/digital circuits emulate spiking neurons and synapses with extremely high energy efficiency, an approach known as "neuromorphic engineering". However, analog circuits are sensitive to process-induced variation among transistors in a chip ("device mismatch"). For neuromorphic implementation of Spiking Neural Networks (SNNs), mismatch causes parameter variation between identically-configured neurons and synapses. Each chip exhibits a different distribution of neural parameters, causing deployed networks to respond differently between chips. Current solutions to mitigate mismatch based on per-chip calibration or on-chip learning entail increased design complexity, area and cost, making deployment of neuromorphic devices expensive and difficult. Here we present a supervised learning approach that produces SNNs with high robustness to mismatch and other common sources of noise. Our method trains SNNs to perform temporal classification tasks by mimicking a pre-trained dynamical system, using a local learning rule from non-linear control theory. We demonstrate our method on two tasks requiring temporal memory, and measure the robustness of our approach to several forms of noise and mismatch. We show that our approach is more robust than common alternatives for training SNNs. Our method provides robust deployment of pretrained networks on mixed-signal neuromorphic hardware, without requiring per-device training or calibration.
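The "device mismatch" described above can be illustrated with a minimal simulation sketch (not the authors' code): each neuron's nominal parameters are perturbed with multiplicative log-normal noise, a common first-order model of transistor mismatch, so identically-configured leaky integrate-and-fire (LIF) neurons driven by the same input fire at different rates. The parameter values and the noise level `sigma` are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def mismatched(nominal, n, sigma=0.2, rng=rng):
    """Per-neuron parameter values scattered around a shared nominal setting."""
    return nominal * rng.lognormal(mean=0.0, sigma=sigma, size=n)

n = 5
tau_mem = mismatched(20e-3, n)   # membrane time constants (s), nominally identical
v_thresh = mismatched(1.0, n)    # firing thresholds, nominally identical

dt, t_stop = 1e-4, 0.1
v = np.zeros(n)
spikes = np.zeros(n, dtype=int)
for _ in range(int(t_stop / dt)):
    i_in = 1.2                          # identical constant input to every neuron
    v += dt * (-v + i_in) / tau_mem     # leaky integration
    fired = v >= v_thresh
    spikes += fired
    v[fired] = 0.0                      # reset after a spike

# Despite identical configuration and input, spike counts differ per neuron.
print(spikes)
```

Per-chip calibration would measure and correct these per-neuron deviations; the paper's approach instead trains networks whose function is robust to the deviations themselves.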
Pages: 12