Smooth Exact Gradient Descent Learning in Spiking Neural Networks

Times Cited: 0
Authors
Klos, Christian [1 ]
Memmesheimer, Raoul-Martin [1 ]
Affiliations
[1] Univ Bonn, Inst Genet, Neural Network Dynam & Computat, D-53115 Bonn, Germany
Keywords
ERROR-BACKPROPAGATION; NEURONS; SIMULATION; SPARSE; CHAOS; MODEL; FIRE;
DOI
10.1103/PhysRevLett.134.027301
Chinese Library Classification
O4 [Physics];
Subject Classification Code
0702;
Abstract
Gradient descent prevails in artificial neural network training, but seems inept for spiking neural networks, as small parameter changes can cause sudden, disruptive appearances and disappearances of spikes. Here, we demonstrate exact gradient descent based on continuously changing spiking dynamics. These are generated by neuron models whose spikes vanish and appear at the end of a trial, where they cannot influence subsequent dynamics. This also enables gradient-based spike addition and removal. We illustrate our scheme with various tasks and setups, including recurrent and deep, initially silent networks.
Pages: 8
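
The abstract outlines a concrete technical scheme, which a small toy example can make tangible. The following Python/JAX sketch is a hypothetical illustration, not the authors' neuron model: a non-leaky integrate-and-fire neuron with constant drive w crosses threshold 1 at the analytic spike time t* = 1/w, which is a smooth function of w, and the spike's loss contribution is gated by (T - t*)^2 so that both the loss and its gradient vanish continuously as the spike slides past the trial end T. All names (loss, grad_loss, t_target) and the quadratic gate are illustrative assumptions.

    import jax
    import jax.numpy as jnp

    T = 1.0          # trial duration
    t_target = 0.4   # desired spike time

    def loss(w):
        # Non-leaky integrate-and-fire toy: V(t) = w * t, threshold 1,
        # so the first spike occurs at the analytic time t* = 1 / w.
        t_spike = 1.0 / w
        # Quadratic gate: smoothly zero once the spike leaves the trial,
        # so loss and gradient vanish continuously as t* crosses T.
        gate = jnp.clip(T - t_spike, 0.0) ** 2
        return gate * (t_spike - t_target) ** 2

    grad_loss = jax.grad(loss)  # exact gradient through the spike time

    w = 2.0  # initial weight: spike at t* = 0.5, inside the trial
    for _ in range(500):
        w -= 1.0 * grad_loss(w)
    print(float(w), 1.0 / float(w))  # t* approaches t_target = 0.4

In this toy, a spike initialized late enough in the trial (here, t* between 0.7 T and T) is instead pushed past the trial end by the same gradient and smoothly removed, echoing the gradient-based spike removal described in the abstract.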