Learning First-to-Spike Policies for Neuromorphic Control Using Policy Gradients

Cited by: 5
Authors
Rosenfeld, Bleema [1 ]
Simeone, Osvaldo [2 ]
Rajendran, Bipin [1 ]
Affiliations
[1] New Jersey Inst Technol, Dept Elect & Comp Engn, Newark, NJ 07102 USA
[2] Kings Coll London, Dept Informat, Ctr Telecommun Res, London WC2R 2LS, England
Source
2019 IEEE 20TH INTERNATIONAL WORKSHOP ON SIGNAL PROCESSING ADVANCES IN WIRELESS COMMUNICATIONS (SPAWC 2019) | 2019
Funding
European Research Council; US National Science Foundation
Keywords
Spiking Neural Network; Reinforcement Learning; Policy Gradient; Neuromorphic Computing;
DOI
10.1109/spawc.2019.8815546
Chinese Library Classification
TM [Electrical Engineering]; TN [Electronic and Communication Technology]
Subject Classification
0808; 0809
Abstract
Artificial Neural Networks (ANNs) are currently used as function approximators in many state-of-the-art Reinforcement Learning (RL) algorithms. Spiking Neural Networks (SNNs) have been shown to drastically reduce the energy consumption of ANNs by encoding information in sparse temporal binary spike streams, hence emulating the communication mechanism of biological neurons. Due to their low energy consumption, SNNs are considered promising candidates for co-processors in mobile devices. In this work, the use of SNNs as stochastic policies is explored under an energy-efficient first-to-spike action rule, whereby the action taken by the RL agent is determined by the occurrence of the first spike among the output neurons. A policy gradient-based algorithm is derived considering a Generalized Linear Model (GLM) for spiking neurons. Experimental results demonstrate the capability of online-trained SNNs as stochastic policies to gracefully trade off energy consumption, measured by the number of spikes, against control performance. Significant gains are shown compared to the standard approach of converting an offline-trained ANN into an SNN.
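The first-to-spike action rule described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: the GLM neuron is reduced to a memoryless Bernoulli spiking model with a sigmoid firing probability (the paper's GLM also includes temporal filters on input and output spike histories, and the derived policy gradient update is omitted here). All names (`first_to_spike_action`, `weights`, `x_spikes`) are hypothetical.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def first_to_spike_action(weights, x_spikes, rng):
    """Select an RL action via the first-to-spike rule (illustrative sketch).

    weights:  (num_actions, num_inputs) GLM synaptic weights, one output
              neuron per action
    x_spikes: (T, num_inputs) binary spike-encoded observation over T steps
    Returns (action, spike_time); action is None if no output neuron fires.
    """
    T = x_spikes.shape[0]
    for t in range(T):
        # Simplified GLM neuron: Bernoulli spike with sigmoid firing probability
        p = sigmoid(weights @ x_spikes[t])
        fired = rng.random(p.shape[0]) < p
        if fired.any():
            # The first spike determines the action; simultaneous spikes are
            # broken uniformly at random
            return int(rng.choice(np.flatnonzero(fired))), t
    return None, T  # no spike within the horizon: caller must handle fallback

rng = np.random.default_rng(0)
w = np.array([[10.0], [-10.0]])  # output neuron 0 strongly excited by input
x = np.ones((5, 1))              # constant input spike train, T = 5
action, t = first_to_spike_action(w, x, rng)
```

Because the action is a stochastic function of the policy parameters, a REINFORCE-style policy gradient can weight the log-probability of the observed first-spike event by the episode return, which is the route the paper takes for its GLM policy.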
Pages: 5