Trapezoidal Gradient Descent for Effective Reinforcement Learning in Spiking Networks

Cited by: 0
Authors
Pan, Yuhao [1 ]
Wang, Xiucheng [2 ]
Cheng, Nan [2 ]
Qiu, Qi [1 ]
Affiliations
[1] Xidian Univ, Sch Elect Engn, Xian 710071, Peoples R China
[2] Xidian Univ, Sch Telecommun Engn, Xian 710071, Peoples R China
Keywords
SNN; reinforcement learning; spike network; trapezoidal function
DOI
10.1109/UCOM62433.2024.10695930
CLC classification
TM [Electrical engineering]; TN [Electronic technology, communication technology]
Discipline codes
0808; 0809
Abstract
With the rapid development of artificial intelligence technology, the field of reinforcement learning has continuously achieved breakthroughs in both theory and practice. However, traditional reinforcement learning algorithms often entail high energy consumption during interactions with the environment. Spiking Neural Networks (SNNs), with their low energy consumption and performance comparable to that of deep neural networks, have garnered widespread attention. To reduce the energy consumption of practical reinforcement learning applications, researchers have successively proposed the Pop-SAN and MDC-SAN algorithms. Nonetheless, these algorithms use rectangular functions to approximate the gradient of the spike function during training, resulting in low sensitivity and leaving room for improvement in the training effectiveness of SNNs. On this basis, we propose a trapezoidal approximation of the spike gradient, which not only preserves the original stable learning behavior but also enhances the model's adaptability and response sensitivity under various signal dynamics. Simulation results show that the improved algorithm, which replaces the rectangular surrogate with the trapezoidal approximation gradient, achieves better convergence speed and performance than the original algorithms and demonstrates good training stability.
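The abstract contrasts the rectangular surrogate gradient used by Pop-SAN/MDC-SAN with the proposed trapezoidal one but does not give a functional form. A minimal sketch of the idea, assuming a standard threshold-centered surrogate; the parameter names (`v_th`, `top`, `base`) and widths are illustrative, not taken from the paper:

```python
def rectangular_surrogate(v, v_th=1.0, width=1.0):
    """Rectangular surrogate gradient: a constant gradient of 1 inside a
    window of `width` around the threshold, and exactly 0 outside it.
    Membrane potentials just outside the window receive no learning signal."""
    return 1.0 if abs(v - v_th) < width / 2 else 0.0

def trapezoidal_surrogate(v, v_th=1.0, top=0.5, base=1.5):
    """Hypothetical trapezoidal surrogate gradient: a flat plateau of 1 for
    |v - v_th| <= top/2, ramping linearly down to 0 at |v - v_th| = base/2.
    The plateau keeps the rectangle's stable full-strength gradient near the
    threshold; the ramps give a graded, nonzero gradient farther away."""
    d = abs(v - v_th)
    half_top, half_base = top / 2, base / 2
    if d <= half_top:
        return 1.0
    if d >= half_base:
        return 0.0
    # Linear ramp from 1 (at half_top) down to 0 (at half_base).
    return (half_base - d) / (half_base - half_top)
```

Under these assumed shapes, a membrane potential at `v = 1.6` gets zero gradient from the rectangular surrogate but a graded gradient from the trapezoid, which is one plausible reading of the claimed gain in response sensitivity.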
Pages: 192-196
Page count: 5