Exploiting memristive autapse and temporal distillation for training spiking neural networks

Cited: 0
Authors
Chen, Tao [1]
Duan, Shukai [1,2,3]
Wang, Lidan [1,2,3,4,5]
Affiliations
[1] Southwest Univ, Coll Artificial Intelligence, Chongqing 400715, Peoples R China
[2] Chongqing Key Lab Brain inspired Comp & Intelligen, Chongqing 400715, Peoples R China
[3] Natl & Local Joint Engn Lab Intelligent Transmiss, Chongqing 400715, Peoples R China
[4] Southwest Univ, Minist Educ, Key Lab Luminescence Anal & Mol Sensing, Chongqing 400715, Peoples R China
[5] State Key Lab Intelligent Vehicle Safety Technol, Chongqing 400023, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Spiking neural network; Memristive autapse; Adaptive self-feedback connections; Temporal information; Knowledge distillation; Neurons
DOI
10.1016/j.knosys.2024.112627
Chinese Library Classification
TP18 [Theory of artificial intelligence]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Spiking neural networks (SNNs) have attracted widespread attention due to their brain-inspired information processing mechanism and their low-power, sparse, accumulation-based computation on neuromorphic chips. The surrogate gradient method makes it possible to train deep SNNs with backpropagation and achieves satisfactory performance on some tasks. However, as the network becomes deeper, spike information may fail to propagate to the deeper layers, causing the output layer to make wrong predictions in recognition tasks. Inspired by the autaptic structure in the cerebral cortex, which is formed by an axon connecting to the dendrites of its own neuron and is capable of modulating neuronal activity, we use discrete memristors to build feedback-connected autapses that adaptively regulate the precision of the spikes. Further, to prevent an outlier at any single time step from distorting the overall output, we distill the time-averaged knowledge into the sub-model at each time step to correct potential errors. Combining these two methods, we propose a deep SNN based on the Leaky Integrate-and-Fire (LIF) neuron model with memristive autapse and temporal distillation, referred to as MA-SNN. A series of experiments on static datasets (CIFAR10 and CIFAR100) and neuromorphic datasets (DVS-CIFAR10 and N-Caltech101) demonstrates the competitiveness of the proposed model and validates the effectiveness of its components. Code for MA-SNN is available at: https://github.com/CHNtao/MA-SNN.
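The two mechanisms summarized in the abstract (a self-feedback autapse inside the LIF neuron, and a distillation loss that pulls each time step toward the time-averaged prediction) can be illustrated with a minimal PyTorch sketch. Everything below is an assumption-laden simplification, not the authors' implementation: the names SurrogateSpike, MemristiveAutapseLIF, w_aut, temporal_distillation_loss, tau, v_th, alpha, and T_kd are made up for illustration, and the paper's discrete memristor conductance model is reduced to a single trainable feedback scalar; see the linked repository for the actual code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike in the forward pass, rectangular surrogate gradient
    in the backward pass (the standard trick for training deep SNNs)."""

    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v >= 0.0).float()

    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        # Let gradients through only in a window around the threshold.
        return grad_out * (v.abs() < 0.5).float()


class MemristiveAutapseLIF(nn.Module):
    """LIF neuron whose previous spike feeds back through a self-connection
    (autapse). The paper drives this feedback with a discrete memristor
    whose conductance adapts with activity; here it is simplified to a
    single trainable scalar w_aut."""

    def __init__(self, tau=2.0, v_th=1.0):
        super().__init__()
        self.tau, self.v_th = tau, v_th
        self.w_aut = nn.Parameter(torch.tensor(0.1))  # autapse strength

    def forward(self, x_seq):
        # x_seq: (T, B, N) input currents over T time steps.
        v = torch.zeros_like(x_seq[0])  # membrane potential
        s = torch.zeros_like(x_seq[0])  # previous spike
        out = []
        for x in x_seq:
            # Leaky integration of input plus the autaptic self-feedback.
            v = v + (x + self.w_aut * s - v) / self.tau
            s = SurrogateSpike.apply(v - self.v_th)
            v = v * (1.0 - s)  # hard reset after a spike
            out.append(s)
        return torch.stack(out)  # (T, B, N) output spikes


def temporal_distillation_loss(logits_seq, target, alpha=0.5, T_kd=2.0):
    """Distill the time-averaged prediction (teacher) into the sub-model
    at each time step (students), so one outlier step cannot dominate.
    logits_seq: (T, B, C); target: (B,) class labels."""
    mean_logits = logits_seq.mean(dim=0).detach()  # averaged "teacher"
    ce = torch.stack([F.cross_entropy(l, target) for l in logits_seq]).mean()
    kd = torch.stack([
        F.kl_div(F.log_softmax(l / T_kd, dim=-1),
                 F.softmax(mean_logits / T_kd, dim=-1),
                 reduction="batchmean") * T_kd ** 2
        for l in logits_seq
    ]).mean()
    return (1.0 - alpha) * ce + alpha * kd
```

In training, the per-step logits from the readout head would be stacked into a (T, B, C) tensor and passed to temporal_distillation_loss together with the labels; at inference the prediction is the mean of the per-step logits, so no single outlier step dominates the final output.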
Pages: 12