Sharing leaky-integrate-and-fire neurons for memory-efficient spiking neural networks

Cited by: 3
Authors
Kim, Youngeun [1]
Li, Yuhang [1]
Moitra, Abhishek [1]
Yin, Ruokai [1]
Panda, Priyadarshini [1]
Affiliations
[1] Yale Univ, Dept Elect Engn, New Haven, CT 06520 USA
Funding
U.S. National Science Foundation;
Keywords
spiking neural network; image recognition; event-based processing; energy-efficient deep learning; neuromorphic computing;
DOI
10.3389/fnins.2023.1230002
CLC number
Q189 [Neuroscience];
Subject classification code
071006;
Abstract
Spiking Neural Networks (SNNs) have gained increasing attention as energy-efficient neural networks owing to their binary and asynchronous computation. However, their non-linear activation, the Leaky-Integrate-and-Fire (LIF) neuron, requires additional memory to store the membrane voltage that captures the temporal dynamics of spikes. Although the memory cost of LIF neurons grows significantly with the input dimension, techniques to reduce it have not been explored so far. To address this, we propose a simple and effective solution, EfficientLIF-Net, which shares LIF neurons across different layers and channels. EfficientLIF-Net achieves accuracy comparable to standard SNNs while providing up to ~4.3x forward memory efficiency and ~21.9x backward memory efficiency for LIF neurons. We conduct experiments on various datasets including CIFAR10, CIFAR100, TinyImageNet, ImageNet-100, and N-Caltech101. Furthermore, we show that our approach also offers advantages on Human Activity Recognition (HAR) datasets, which rely heavily on temporal information. The code has been released at .
Pages: 15
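
As a rough illustration of the neuron-sharing idea described in the abstract above, here is a minimal sketch assuming PyTorch and two layers with matching feature shapes. The class name SharedLIF, the leak and threshold values, and the soft-reset rule are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn


class SharedLIF(nn.Module):
    """One LIF neuron whose single membrane-voltage buffer is reused by
    every layer that calls it, instead of each layer keeping its own."""

    def __init__(self, leak: float = 0.9, threshold: float = 1.0):
        super().__init__()
        self.leak = leak            # membrane leak factor (hypothetical value)
        self.threshold = threshold  # firing threshold (hypothetical value)
        self.v = None               # the one shared membrane buffer

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if self.v is None:
            self.v = torch.zeros_like(x)  # allocated once, then reused
        self.v = self.leak * self.v + x   # leaky integration of input current
        spikes = (self.v >= self.threshold).float()  # binary spike output
        self.v = self.v - spikes * self.threshold    # soft reset after firing
        return spikes

    def reset(self):
        self.v = None  # clear state between input samples


# Two same-shaped layers reuse one membrane buffer rather than keeping two.
lif = SharedLIF()
conv1 = nn.Conv2d(64, 64, 3, padding=1)
conv2 = nn.Conv2d(64, 64, 3, padding=1)

x = torch.rand(1, 64, 32, 32)
with torch.no_grad():
    for t in range(4):          # unroll the SNN over 4 timesteps
        s1 = lif(conv1(x))      # layer 1 updates the shared buffer
        s2 = lif(conv2(s1))     # layer 2 reuses that same buffer
lif.reset()
```

Because the buffer is stored once rather than once per layer, the membrane-state footprint stays constant as layers are added, which is the source of the forward-memory saving claimed in the abstract.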