Conversion of Artificial Recurrent Neural Networks to Spiking Neural Networks for Low-power Neuromorphic Hardware

Cited: 0
Authors
Diehl, Peter U. [1 ,2 ]
Zarrella, Guido [3 ]
Cassidy, Andrew [4 ]
Pedroni, Bruno U. [5 ]
Neftci, Emre [5 ,6 ]
Affiliations
[1] Swiss Fed Inst Technol, Inst Neuroinformat, Zurich, Switzerland
[2] Univ Zurich, CH-8006 Zurich, Switzerland
[3] Mitre Corp, Burlington Rd, Bedford, MA 01730 USA
[4] IBM Res Almaden, San Jose, CA USA
[5] Univ Calif San Diego, Inst Neural Computat, La Jolla, CA USA
[6] UC Irvine, Dept Cognit Sci, Irvine, CA USA
Keywords
DEEP
DOI
None available
Chinese Library Classification
TP301 [Theory and Methods]
Discipline Code
081202
Abstract
In recent years the field of neuromorphic low-power systems has gained significant momentum, spurring the development of brain-inspired hardware systems that operate on principles fundamentally different from those of standard digital computers and thereby consume orders of magnitude less power. However, their wider use is still hindered by the lack of algorithms that can harness the strengths of such architectures. While neuromorphic adaptations of representation learning algorithms are now emerging, the efficient processing of temporal sequences or variable-length inputs remains difficult, partly due to challenges in representing and configuring the dynamics of spiking neural networks. Recurrent neural networks (RNNs) are widely used in machine learning to solve a variety of sequence learning tasks. In this work we present a "train-and-constrain" methodology that enables the mapping of machine-learned (Elman) RNNs onto a substrate of spiking neurons, while remaining compatible with the capabilities of current and near-future neuromorphic systems. The method consists of first training RNNs using backpropagation through time, then discretizing the weights, and finally converting them to spiking RNNs by matching the responses of the artificial neurons with those of the spiking neurons. We demonstrate our approach on a natural language processing task (question classification), where we map the entire recurrent layer of the network onto IBM's Neurosynaptic System TrueNorth, a spike-based digital neuromorphic hardware architecture. TrueNorth imposes specific constraints on connectivity and on neural and synaptic parameters. To satisfy these constraints, it was necessary to discretize the synaptic weights to 16 levels, discretize the neural activities to 16 levels, and limit the fan-in to 64 inputs. Surprisingly, we find that short synaptic delays are sufficient to implement the dynamic (temporal) aspect of the RNN in the question classification task.
Furthermore, we observed that the discretization of the neural activities is beneficial to our train-and-constrain approach. The hardware-constrained model achieved 74% accuracy in question classification while using less than 0.025% of the cores on one TrueNorth chip, resulting in an estimated power consumption of approximately 17 µW.
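The weight-discretization step described in the abstract can be illustrated with a small sketch. The function below quantizes trained weights to at most 16 discrete levels, mirroring the 16-level synaptic weight constraint the paper reports for TrueNorth; the uniform symmetric quantization scheme and the function name are illustrative assumptions, not the authors' exact mapping.

```python
def discretize_weights(weights, n_levels=16):
    """Quantize a list of trained weights to at most n_levels discrete
    values. The uniform symmetric scheme used here is an illustrative
    choice, not the paper's exact procedure."""
    w_max = max(abs(w) for w in weights)
    if w_max == 0:
        return [0.0 for _ in weights]
    half = n_levels // 2        # integer levels span [-half, half - 1]
    step = w_max / (half - 1)   # largest positive weight maps to level half - 1
    quantized = []
    for w in weights:
        # Round to the nearest level, clipping to the representable range.
        level = max(-half, min(half - 1, round(w / step)))
        quantized.append(level * step)
    return quantized


weights = [0.93, -0.41, 0.07, -0.88, 0.55, -0.02, 0.31, -0.64]
print(discretize_weights(weights))
```

With 16 levels the quantization error is bounded by half a step, i.e. at most `w_max / 14` for this scheme, which gives a rough sense of why such coarse weights can still preserve task accuracy after training.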
Pages: 8
Related Papers (50 total)
  • [1] Compiling Spiking Neural Networks to Neuromorphic Hardware
    Song, Shihao
    Balaji, Adarsha
    Das, Anup
    Kandasamy, Nagarajan
    Shackleford, James
    21ST ACM SIGPLAN/SIGBED CONFERENCE ON LANGUAGES, COMPILERS, AND TOOLS FOR EMBEDDED SYSTEMS (LCTES '20), 2020, : 38 - 50
  • [2] Mapping Spiking Neural Networks to Neuromorphic Hardware
    Balaji, Adarsha
    Das, Anup
    Wu, Yuefeng
    Huynh, Khanh
    Dell'Anna, Francesco G.
    Indiveri, Giacomo
    Krichmar, Jeffrey L.
    Dutt, Nikil D.
    Schaafsma, Siebren
    Catthoor, Francky
    IEEE TRANSACTIONS ON VERY LARGE SCALE INTEGRATION (VLSI) SYSTEMS, 2020, 28 (01) : 76 - 86
  • [3] FEDERATED NEUROMORPHIC LEARNING OF SPIKING NEURAL NETWORKS FOR LOW-POWER EDGE INTELLIGENCE
    Skatchkovsky, Nicolas
    Fang, Hyeryung
    Simeone, Osvaldo
    2020 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, 2020, : 8524 - 8528
  • [4] Biologically-inspired training of spiking recurrent neural networks with neuromorphic hardware
    Bohnstingl, Thomas
    Surina, Anja
    Fabre, Maxime
    Demirag, Yigit
    Frenkel, Charlotte
    Payvand, Melika
    Indiveri, Giacomo
    Pantazi, Angeliki
    2022 IEEE INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE CIRCUITS AND SYSTEMS (AICAS 2022): INTELLIGENT TECHNOLOGY IN THE POST-PANDEMIC ERA, 2022, : 218 - 221
  • [5] Advancements in Algorithms and Neuromorphic Hardware for Spiking Neural Networks
    Javanshir, Amirhossein
    Nguyen, Thanh Thi
    Mahmud, M. A. Parvez
    Kouzani, Abbas Z.
    NEURAL COMPUTATION, 2022, 34 (06) : 1289 - 1328
  • [6] Community detection with spiking neural networks for neuromorphic hardware
    Hamilton, Kathleen E.
    Imam, Neena
    Humble, Travis S.
    PROCEEDINGS OF NEUROMORPHIC COMPUTING SYMPOSIUM (NCS 2017), 2017,
  • [7] Benchmarking Deep Spiking Neural Networks on Neuromorphic Hardware
    Ostrau, Christoph
    Homburg, Jonas
    Klarhorst, Christian
    Thies, Michael
    Rueckert, Ulrich
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING, ICANN 2020, PT II, 2020, 12397 : 610 - 621
  • [8] Neuromorphic Recurrent Spiking Neural Networks for EMG Gesture Classification and Low Power Implementation on Loihi
    Bezugam, Sai Sukruth
    Shaban, Ahmed
    Suri, Manan
    2023 IEEE INTERNATIONAL SYMPOSIUM ON CIRCUITS AND SYSTEMS, ISCAS, 2023,
  • [9] Compiling Spiking Neural Networks to Mitigate Neuromorphic Hardware Constraints
    Balaji, Adarsha
    Das, Anup
    2020 11TH INTERNATIONAL GREEN AND SUSTAINABLE COMPUTING WORKSHOPS (IGSC), 2020,
  • [10] CyNAPSE: A Low-power Reconfigurable Neural Inference Accelerator for Spiking Neural Networks
    Saha, Saunak
    Duwe, Henry
    Zambreno, Joseph
    JOURNAL OF SIGNAL PROCESSING SYSTEMS, 2020, 92 : 907 - 929