Conversion of Artificial Recurrent Neural Networks to Spiking Neural Networks for Low-power Neuromorphic Hardware

Cited by: 0
Authors:
Diehl, Peter U. [1 ,2 ]
Zarrella, Guido [3 ]
Cassidy, Andrew [4 ]
Pedroni, Bruno U. [5 ]
Neftci, Emre [5 ,6 ]
Affiliations:
[1] Swiss Fed Inst Technol, Inst Neuroinformat, Zurich, Switzerland
[2] Univ Zurich, CH-8006 Zurich, Switzerland
[3] Mitre Corp, Burlington Rd, Bedford, MA 01730 USA
[4] IBM Res Almaden, San Jose, CA USA
[5] Univ Calif San Diego, Inst Neural Computat, La Jolla, CA USA
[6] UC Irvine, Dept Cognit Sci, Irvine, CA USA
Keywords: DEEP
DOI: not available
CLC number: TP301 [Theory, Methods]
Subject classification code: 081202
Abstract:
In recent years, the field of neuromorphic low-power systems has gained significant momentum, spurring brain-inspired hardware systems that operate on principles fundamentally different from those of standard digital computers and thereby consume orders of magnitude less power. However, their wider use is still hindered by the lack of algorithms that can harness the strengths of such architectures. While neuromorphic adaptations of representation learning algorithms are now emerging, the efficient processing of temporal sequences or variable-length inputs remains difficult, partly due to challenges in representing and configuring the dynamics of spiking neural networks. Recurrent neural networks (RNNs) are widely used in machine learning to solve a variety of sequence learning tasks. In this work we present a "train-and-constrain" methodology that enables the mapping of machine-learned (Elman) RNNs onto a substrate of spiking neurons, while remaining compatible with the capabilities of current and near-future neuromorphic systems. The method consists of first training RNNs using backpropagation through time, then discretizing the weights, and finally converting them to spiking RNNs by matching the responses of artificial neurons with those of the spiking neurons. We demonstrate our approach on a natural language processing task (question classification), mapping the entire recurrent layer of the network onto IBM's Neurosynaptic System TrueNorth, a spike-based digital neuromorphic hardware architecture. TrueNorth imposes specific constraints on connectivity and on neural and synaptic parameters. To satisfy these constraints, it was necessary to discretize the synaptic weights to 16 levels, discretize the neural activities to 16 levels, and limit the fan-in to 64 inputs. Surprisingly, we find that short synaptic delays are sufficient to implement the dynamic (temporal) aspect of the RNN in the question classification task. Furthermore, we observed that the discretization of the neural activities is beneficial to our train-and-constrain approach. The hardware-constrained model achieved 74% accuracy in question classification while using less than 0.025% of the cores on one TrueNorth chip, resulting in an estimated power consumption of approximately 17 μW.
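The discretization step of the train-and-constrain pipeline described above can be sketched as a uniform quantizer. This is a minimal illustration, not the paper's implementation: the `quantize` helper, the Gaussian weight initialization, and the choice of a uniform (rather than learned or nonuniform) quantization grid are all assumptions; the abstract specifies only the 16-level and 64-fan-in constraints imposed by TrueNorth.

```python
import random

def quantize(values, n_levels=16):
    # Uniform quantizer: snap each value to one of n_levels evenly spaced
    # levels spanning the observed range. Illustrative stand-in only -- the
    # exact discretization scheme used in the paper is not specified here.
    lo, hi = min(values), max(values)
    if hi == lo:
        return [lo for _ in values]
    step = (hi - lo) / (n_levels - 1)
    return [lo + round((v - lo) / step) * step for v in values]

# Hypothetical row of trained recurrent weights; the fan-in is kept at 64
# inputs, matching the TrueNorth constraint described in the abstract.
random.seed(0)
weights = [random.gauss(0.0, 0.1) for _ in range(64)]

weights_q = quantize(weights)  # weights snapped to at most 16 distinct levels
# The same operation applies to the neural activities:
activities_q = quantize([0.0, 0.3, 0.7, 1.0], n_levels=16)
```

The quantization error of this scheme is bounded by half a level spacing, i.e. `(max - min) / 30` for 16 levels, which gives a rough sense of why a sufficiently trained network can tolerate the constraint.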
Pages: 8