NeuroMoCo: a neuromorphic momentum contrast learning method for spiking neural networks

Times Cited: 0
Authors
Ma, Yuqi [1 ,2 ]
Wang, Huamin [1 ,2 ]
Shen, Hangchi [1 ,2 ]
Chen, Xuemei [1 ,2 ]
Duan, Shukai [1 ,2 ]
Wen, Shiping [3 ]
Affiliations
[1] Southwest Univ, Chongqing 400715, Peoples R China
[2] Chongqing Key Lab Brain Inspired Comp & Intelligen, Chongqing 400715, Peoples R China
[3] Univ Technol Sydney, Australian Artificial Intelligence Inst, Sydney 2007, Australia
Funding
National Natural Science Foundation of China;
Keywords
Spiking neural networks; Contrastive learning; Self-supervised pre-training; Neuromorphic datasets; Image classification; INTELLIGENCE; VISION;
DOI
10.1007/s10489-024-05982-1
CLC Number
TP18 [Theory of Artificial Intelligence];
Discipline Codes
081104; 0812; 0835; 1405;
Abstract
Recently, brain-inspired spiking neural networks (SNNs) have attracted considerable research attention owing to their inherent bio-interpretability, event-triggered operation, and powerful perception of spatiotemporal information, properties well suited to handling event-based neuromorphic datasets. In contrast to conventional static image datasets, event-based neuromorphic datasets make feature extraction markedly harder because of their distinctive time-series and sparsity characteristics, which limits classification accuracy. To overcome this challenge, this paper introduces Neuromorphic Momentum Contrast Learning (NeuroMoCo), a novel approach that extends the benefits of self-supervised pre-training to SNNs in order to unlock their potential. This is the first time that self-supervised learning (SSL) based on momentum contrastive learning has been realized in SNNs. In addition, we devise a novel loss function, MixInfoNCE, tailored to the temporal characteristics of neuromorphic data to further increase classification accuracy; its contribution is verified through rigorous ablation experiments. Finally, experiments on DVS-CIFAR10, DVS128Gesture and N-Caltech101 show that NeuroMoCo establishes new state-of-the-art (SOTA) benchmarks: 83.6% (Spikformer-2-256), 98.62% (Spikformer-2-256), and 84.4% (SEW-ResNet-18), respectively.
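For context, the momentum-contrast mechanism the abstract builds on can be summarized as a minimal PyTorch sketch: a query encoder trained by backpropagation, a key encoder updated as an exponential moving average (EMA) of the query encoder, and an InfoNCE loss that contrasts each query against its positive key and a queue of negatives. This is a generic illustration of MoCo-style training, not the paper's implementation; the momentum coefficient m, the temperature tau, and the queue handling are illustrative assumptions, and the paper's MixInfoNCE loss (adapted to SNN temporal dynamics) is not reproduced here.

```python
# Generic MoCo-style momentum contrast sketch (illustrative; not the paper's code).
# Assumes PyTorch; m, tau, and the negative queue are illustrative choices.
import torch
import torch.nn.functional as F

def momentum_update(encoder_q, encoder_k, m=0.999):
    """Key encoder tracks the query encoder as an exponential moving average."""
    with torch.no_grad():
        for p_q, p_k in zip(encoder_q.parameters(), encoder_k.parameters()):
            p_k.mul_(m).add_(p_q, alpha=1.0 - m)

def info_nce(q, k, queue, tau=0.07):
    """Standard InfoNCE: one positive key per query, queue entries as negatives.

    q:     (N, D) query embeddings (gradient flows through these)
    k:     (N, D) positive key embeddings (detached, from the momentum encoder)
    queue: (K, D) negative key embeddings accumulated from previous batches
    """
    q = F.normalize(q, dim=1)
    k = F.normalize(k, dim=1)
    queue = F.normalize(queue, dim=1)
    l_pos = torch.einsum("nd,nd->n", q, k).unsqueeze(1)  # (N, 1) positive logits
    l_neg = torch.einsum("nd,kd->nk", q, queue)          # (N, K) negative logits
    logits = torch.cat([l_pos, l_neg], dim=1) / tau
    labels = torch.zeros(q.size(0), dtype=torch.long, device=q.device)
    return F.cross_entropy(logits, labels)  # positive key sits at index 0
```

In a full training loop, q and k would be embeddings of two augmented views of the same event stream, with k produced by the momentum encoder and then enqueued to serve as negatives for later batches.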
Pages: 13