Run-time Mapping of Spiking Neural Networks to Neuromorphic Hardware

Cited by: 0
Authors
Adarsha Balaji
Thibaut Marty
Anup Das
Francky Catthoor
Affiliations
[1] Drexel University, Neuromorphic Division
[2] ENS Rennes
[3] IMEC
Source
Journal of Signal Processing Systems | 2020, Vol. 92
Keywords
Spiking Neural Networks (SNN); Neuromorphic computing; Internet of Things (IoT); Run-time; Mapping;
DOI
Not available
Abstract
Neuromorphic architectures implement biological neurons and synapses to execute machine learning algorithms with spiking neurons and bio-inspired learning rules. These architectures are energy efficient and therefore suitable for cognitive information processing in resource- and power-constrained environments, such as the sensor and edge nodes of the internet of things (IoT). To map a spiking neural network (SNN) to a neuromorphic architecture, prior works have proposed design-time solutions, where the SNN is first analyzed offline using representative data and then mapped to the hardware to optimize an objective function such as minimizing spike communication or maximizing resource utilization. In many emerging applications, however, the machine learning model may change with the input through online learning rules: new connections may form or existing connections may disappear at run-time based on input excitation. An already mapped SNN may therefore need to be re-mapped to the neuromorphic hardware to retain optimal performance. Due to their high computation time, design-time approaches are not suitable for re-mapping a machine learning model at run-time after every learning epoch. In this paper, we propose a design methodology to partition and map the neurons and synapses of online-learning SNN-based applications to neuromorphic architectures at run-time. Our methodology operates in two steps: step 1 is a layer-wise greedy approach that partitions the SNN into clusters of neurons and synapses while incorporating the constraints of the neuromorphic architecture, and step 2 is a hill-climbing optimization that minimizes the total spikes communicated between clusters, reducing energy consumption on the shared interconnect of the architecture.
We evaluate the feasibility of our algorithm using synthetic and realistic SNN-based applications, and demonstrate that it reduces SNN mapping time by an average of 780x compared to a state-of-the-art design-time SNN partitioning approach, with only 6.25% lower solution quality.
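The two-step methodology described in the abstract can be sketched in a few lines of Python. This is a minimal illustration, not the authors' implementation: the function names (`greedy_partition`, `hill_climb`), the per-cluster neuron capacity, the spike-count dictionary, and the random-swap move are all assumptions made for the sketch; the paper's actual constraints and move set may differ.

```python
import random

def greedy_partition(layers, capacity):
    """Step 1 (sketch): pack neurons layer by layer into clusters,
    respecting a per-cluster neuron capacity (a hardware constraint)."""
    clusters, current = [], []
    for layer in layers:
        for neuron in layer:
            if len(current) == capacity:
                clusters.append(current)
                current = []
            current.append(neuron)
    if current:
        clusters.append(current)
    return clusters

def inter_cluster_spikes(assign, spikes):
    """Total spikes crossing cluster boundaries; spikes[(i, j)] is the
    spike count sent from neuron i to neuron j."""
    return sum(c for (i, j), c in spikes.items() if assign[i] != assign[j])

def hill_climb(clusters, spikes, iters=1000, seed=0):
    """Step 2 (sketch): repeatedly swap two neurons between clusters and
    keep the swap only if it lowers inter-cluster spike traffic."""
    rng = random.Random(seed)
    assign = {n: k for k, cl in enumerate(clusters) for n in cl}
    best = inter_cluster_spikes(assign, spikes)
    neurons = list(assign)
    for _ in range(iters):
        a, b = rng.sample(neurons, 2)
        if assign[a] == assign[b]:
            continue  # swap within a cluster changes nothing
        assign[a], assign[b] = assign[b], assign[a]
        cost = inter_cluster_spikes(assign, spikes)
        if cost < best:
            best = cost          # keep the improving swap
        else:
            assign[a], assign[b] = assign[b], assign[a]  # revert
    return assign, best
```

Because swaps preserve cluster sizes, the capacity constraint established in step 1 is never violated in step 2, and the accepted-only-if-better rule guarantees the final cost is no worse than the greedy starting point.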
Pages: 1293-1302 (9 pages)