Robust Spatiotemporal Prototype Learning for Spiking Neural Networks

Cited by: 0
Authors
Cai, Wuque [1 ]
Sun, Hongze [1 ]
Liao, Qianqian [1 ]
He, Jiayi [1 ]
Chen, Duo [1 ,2 ]
Yao, Dezhong [1 ,3 ,4 ]
Guo, Daqing [1 ]
Affiliations
[1] Univ Elect Sci & Technol China, Clin Hosp Chengdu Brain Sci Inst, Sch Life Sci & Technol, MOE Key Lab NeuroInformat,China Cuba Belt & Rd Joi, Chengdu, Peoples R China
[2] Chongqing Univ Educ, Sch Artificial Intelligence, Chongqing, Peoples R China
[3] Chinese Acad Med Sci, Res Unit Neuro Informat 2019RU035, Chengdu 611731, Peoples R China
[4] Zhengzhou Univ, Sch Elect Engn, Zhengzhou 450001, Peoples R China
Funding
China Postdoctoral Science Foundation;
Keywords
Decoding; Prototypes; Encoding; Robustness; Training; Spatiotemporal phenomena; Neurons; Learning systems; Data models; Biological system modeling; Robust learning; spatiotemporal prototype (STP); spiking neural networks (SNNs);
DOI
10.1109/TNNLS.2025.3583747
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Spiking neural networks (SNNs) leverage their spike-driven nature to achieve high energy efficiency, positioning them as a promising alternative to traditional artificial neural networks (ANNs). The spiking decoder, a crucial output component, significantly affects the performance of SNNs. However, current rate-coding decoding schemes for SNNs often lack robustness and have no training framework suited to robust learning, while alternatives to rate coding generally yield worse overall performance. To address these challenges, we propose spatiotemporal prototype (STP) learning for SNNs, which uses multiple learnable binarized prototypes for distance-based decoding. In addition, we introduce a cotraining framework that jointly optimizes prototypes and model parameters, enabling mutual adaptation of the two components. STP learning clusters feature centers through supervised learning to ensure effective aggregation around the prototypes, while maintaining sufficient spacing between prototypes to handle noise and interference. This dual capability results in superior stability and robustness. On eight benchmark datasets with diverse challenges, the STP-SNN model achieves performance comparable or superior to state-of-the-art methods. Notably, STP learning demonstrates exceptional robustness and stability in multitask experiments. Overall, these findings show that STP learning is an effective means of improving the performance and robustness of SNNs.
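The distance-based decoding with binarized prototypes described in the abstract can be sketched roughly as follows. This is a minimal illustrative sketch, not the authors' implementation: the names (`binarize`, `decode`), the hand-picked toy prototypes, and the choices of hard 0/1 binarization and Hamming distance are all assumptions about one plausible reading of "learnable binarized prototypes for distance-based decoding"; in the actual method the prototypes are optimized jointly with the SNN weights.

```python
import numpy as np

# Illustrative sketch (not the authors' code): each class owns a learnable
# real-valued prototype that is binarized to {0, 1}; a binary spike-feature
# vector is assigned to the class whose binarized prototype is nearest in
# Hamming distance.

# Hand-picked real-valued prototypes for 3 classes over a 4-D feature space;
# in practice these would be learned jointly with the network parameters.
proto_real = np.array([
    [ 1.0, -1.0, -1.0, -1.0],   # class 0 -> binarized [1, 0, 0, 0]
    [-1.0,  1.0,  1.0, -1.0],   # class 1 -> binarized [0, 1, 1, 0]
    [-1.0, -1.0,  1.0,  1.0],   # class 2 -> binarized [0, 0, 1, 1]
])

def binarize(x):
    """Hard binarization to {0, 1} (a straight-through estimator would be
    used during training to pass gradients through this step)."""
    return (x > 0.0).astype(np.float64)

def decode(spike_features, proto_real):
    """Assign each binary feature vector (N, D) to the class of its nearest
    binarized prototype (C, D), using Hamming distance."""
    protos = binarize(proto_real)                           # (C, D) in {0, 1}
    diffs = np.abs(spike_features[:, None, :] - protos[None, :, :])
    return diffs.sum(axis=-1).argmin(axis=1)                # (N,) class ids

# Features placed exactly on each prototype decode to that prototype's class.
clean = binarize(proto_real)
print(decode(clean, proto_real))         # -> [0 1 2]

# Spacing between prototypes gives noise tolerance: flipping one bit of the
# class-0 feature still decodes to class 0.
noisy = clean.copy()
noisy[0, 3] = 1.0
print(decode(noisy, proto_real))         # -> [0 1 2]
```

The spacing between binarized prototypes plays the role the abstract attributes to STP learning: the larger the minimum pairwise Hamming distance between prototypes, the more flipped bits (noise) the decoder can absorb before misclassifying.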
Pages: 15