A Spatial-Channel-Temporal-Fused Attention for Spiking Neural Networks

Cited by: 5
Authors
Cai, Wuque [1 ]
Sun, Hongze [1 ]
Liu, Rui [1 ]
Cui, Yan [2 ]
Wang, Jun [1 ]
Xia, Yang [1 ]
Yao, Dezhong [1 ,3 ,4 ]
Guo, Daqing [1 ]
Affiliations
[1] Univ Elect Sci & Technol China, Clin Hosp Chengdu Brain Sci Inst, Sch Life Sci & Technol, Minist Educ MOE Key Lab NeuroInformat, Chengdu 611731, Peoples R China
[2] Univ Elect Sci & Technol China, Sichuan Prov Peoples Hosp, Dept Neurosurg, Chengdu 610072, Peoples R China
[3] Chinese Acad Med Sci, Res Unit NeuroInformat 2019RU035, Chengdu 611731, Peoples R China
[4] Zhengzhou Univ, Sch Elect Engn, Zhengzhou 450001, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Neurons; Visualization; Biological system modeling; Training; Computational modeling; Membrane potentials; Biological neural networks; Event streams; predictive attentional remapping; spatial-channel-temporal-fused attention (SCTFA); spiking neural networks (SNNs); visual attention; VISUAL-ATTENTION;
DOI
10.1109/TNNLS.2023.3278265
CLC Classification Number
TP18 [Theory of Artificial Intelligence]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Spiking neural networks (SNNs) mimic the computational strategies of the brain and exhibit substantial capabilities in spatiotemporal information processing. As an essential factor in human perception, visual attention refers to the dynamic process of selecting salient regions in biological vision systems. Although visual attention mechanisms have achieved great success in computer vision applications, they are rarely introduced into SNNs. In the present study, inspired by experimental observations on predictive attentional remapping, we propose a new spatial-channel-temporal-fused attention (SCTFA) module that guides SNNs to efficiently capture underlying target regions by utilizing accumulated historical spatial-channel information. Through a systematic evaluation on three event stream datasets (DVS Gesture, SL-Animals-DVS, and MNIST-DVS), we demonstrate that the SNN with the SCTFA module (SCTFA-SNN) not only significantly outperforms the baseline SNN (BL-SNN) and two other SNN models with degenerated attention modules, but also achieves accuracy competitive with existing state-of-the-art (SOTA) methods. Furthermore, our detailed analysis shows that the proposed SCTFA-SNN model is highly robust to noise and remarkably stable when faced with incomplete data, while maintaining acceptable complexity and efficiency. Overall, these findings indicate that incorporating appropriate cognitive mechanisms of the brain may provide a promising approach for elevating the capabilities of SNNs.
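To make the mechanism sketched in the abstract more concrete, below is a minimal, hypothetical PyTorch illustration of an SCTFA-style attention block; it is not the authors' implementation. It assumes spiking feature maps shaped [T, B, C, H, W], approximates "accumulated historical spatial-channel information" with an exponential moving average of past frames, and uses squeeze-excitation-style channel attention plus a CBAM-style spatial mask as stand-ins for the paper's fusion scheme. The class name SCTFALikeAttention, the decay parameter, and the reduction ratio are illustrative inventions.

```python
# Minimal sketch of an SCTFA-style attention block (NOT the authors' code).
# Assumption: spiking features arrive as a tensor of shape [T, B, C, H, W],
# and "history" is an exponentially decayed accumulation of earlier frames.
import torch
import torch.nn as nn


class SCTFALikeAttention(nn.Module):
    """Hypothetical spatial-channel-temporal-fused attention.

    At each time step t, an exponentially decayed accumulation of earlier
    feature maps drives (i) channel weights via global pooling and a small
    MLP, and (ii) a spatial mask via a 7x7 convolution over pooled maps;
    both rescale the current frame.
    """

    def __init__(self, channels: int, reduction: int = 8, decay: float = 0.5):
        super().__init__()
        self.decay = decay  # illustrative temporal decay factor
        self.channel_mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )
        self.spatial_conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: [T, B, C, H, W] spiking feature maps
        T, B, C, H, W = x.shape
        history = torch.zeros(B, C, H, W, device=x.device, dtype=x.dtype)
        outputs = []
        for t in range(T):
            # accumulate historical spatial-channel information
            history = self.decay * history + x[t]

            # channel attention computed from the accumulated history
            pooled = history.mean(dim=(2, 3))               # [B, C]
            ch_w = torch.sigmoid(self.channel_mlp(pooled))   # [B, C]
            ch_w = ch_w.view(B, C, 1, 1)

            # spatial attention from channel-pooled history
            avg_map = history.mean(dim=1, keepdim=True)      # [B, 1, H, W]
            max_map = history.amax(dim=1, keepdim=True)      # [B, 1, H, W]
            sp_w = torch.sigmoid(
                self.spatial_conv(torch.cat([avg_map, max_map], dim=1))
            )

            # fuse: rescale the current frame with both attention maps
            outputs.append(x[t] * ch_w * sp_w)
        return torch.stack(outputs, dim=0)


if __name__ == "__main__":
    feats = torch.rand(10, 4, 32, 16, 16)    # toy [T, B, C, H, W] input
    attn = SCTFALikeAttention(channels=32)
    print(attn(feats).shape)                  # torch.Size([10, 4, 32, 16, 16])
```

In this sketch the attention weights are recomputed at every time step from the running history, which is one plausible reading of how past spatial-channel statistics could guide the current frame; the paper's actual fusion of the spatial, channel, and temporal dimensions may differ.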
Pages: 14315-14329
Number of pages: 15
Related Papers
50 records in total
  • [21] SCANN: Side Channel Analysis of Spiking Neural Networks
    Nagarajan, Karthikeyan
    Roy, Rupshali
    Topaloglu, Rasit Onur
    Kannan, Sachhidh
    Ghosh, Swaroop
    CRYPTOGRAPHY, 2023, 7 (02)
  • [22] A Spatial-Temporal Recurrent Neural Network for Video Saliency Prediction
    Zhang, Kao
    Chen, Zhenzhong
    Liu, Shan
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2021, 30 : 572 - 587
  • [23] Mastering the Output Frequency in Spiking Neural Networks
    Falez, Pierre
    Tirilly, Pierre
    Bilasco, Ioan Marius
    Devienne, Philippe
    Boulet, Pierre
    2018 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2018,
  • [24] Proposal of a Control Algorithm for Multiagent Cooperation Using Spiking Neural Networks
    Barton, Adam
    Volna, Eva
    Kotyrba, Martin
    Jarusek, Robert
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2023, 34 (04) : 2016 - 2027
  • [25] An Efficient Learning Algorithm for Direct Training Deep Spiking Neural Networks
    Zhu, Xiaolei
    Zhao, Baixin
    Ma, De
    Tang, Huajin
    IEEE TRANSACTIONS ON COGNITIVE AND DEVELOPMENTAL SYSTEMS, 2022, 14 (03) : 847 - 856
  • [26] A Survey of Intelligent Chip Design Research Based on Spiking Neural Networks
    Chen, Lu
    Xiong, Xingzhong
    Liu, Jun
    IEEE ACCESS, 2022, 10 : 89663 - 89686
  • [27] Unlocking the Potential of Spiking Neural Networks: Understanding the What, Why, and Where
    Wickramasinghe, Buddhi
    Chowdhury, Sayeed Shafayet
    Kosta, Adarsh Kumar
    Ponghiran, Wachirawit
    Roy, Kaushik
    IEEE TRANSACTIONS ON COGNITIVE AND DEVELOPMENTAL SYSTEMS, 2024, 16 (05) : 1648 - 1663
  • [28] Event-Driven Intrinsic Plasticity for Spiking Convolutional Neural Networks
    Zhang, Anguo
    Li, Xiumin
    Gao, Yueming
    Niu, Yuzhen
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2022, 33 (05) : 1986 - 1995
  • [29] Transformer-Based Spiking Neural Networks for Multimodal Audiovisual Classification
    Guo, Lingyue
    Gao, Zeyu
    Qu, Jinye
    Zheng, Suiwu
    Jiang, Runhao
    Lu, Yanfeng
    Qiao, Hong
    IEEE TRANSACTIONS ON COGNITIVE AND DEVELOPMENTAL SYSTEMS, 2024, 16 (03) : 1077 - 1086
  • [30] Temporal Dependent Local Learning for Deep Spiking Neural Networks
    Ma, Chenxiang
    Xu, Junhai
    Yu, Qiang
    2021 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2021,