Sharing leaky-integrate-and-fire neurons for memory-efficient spiking neural networks

Cited by: 2
Authors
Kim, Youngeun [1 ]
Li, Yuhang [1 ]
Moitra, Abhishek [1 ]
Yin, Ruokai [1 ]
Panda, Priyadarshini [1 ]
Affiliations
[1] Yale Univ, Dept Elect Engn, New Haven, CT 06520 USA
Funding
U.S. National Science Foundation (NSF);
Keywords
spiking neural network; image recognition; event-based processing; energy-efficient deep learning; neuromorphic computing;
DOI
10.3389/fnins.2023.1230002
Chinese Library Classification (CLC)
Q189 [Neuroscience];
Discipline code
071006
Abstract
Spiking Neural Networks (SNNs) have gained increasing attention as energy-efficient neural networks owing to their binary and asynchronous computation. However, their non-linear activation, the Leaky-Integrate-and-Fire (LIF) neuron, requires additional memory to store a membrane voltage that captures the temporal dynamics of spikes. Although the memory cost of LIF neurons grows significantly as the input dimension grows, techniques to reduce this memory have not been explored so far. To address this, we propose a simple and effective solution, EfficientLIF-Net, which shares LIF neurons across different layers and channels. EfficientLIF-Net achieves accuracy comparable to standard SNNs while bringing up to ~4.3x forward memory efficiency and ~21.9x backward memory efficiency for LIF neurons. We conduct experiments on various datasets including CIFAR10, CIFAR100, TinyImageNet, ImageNet-100, and N-Caltech101. Furthermore, we show that our approach also offers advantages on Human Activity Recognition (HAR) datasets, which heavily rely on temporal information. The code has been released at .
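The memory saving described in the abstract comes from reusing one membrane-voltage buffer instead of keeping a separate one per layer. The sketch below is a minimal illustration of that idea, not the authors' released code: it assumes standard discrete-time LIF dynamics (leaky integration, threshold firing, soft reset by threshold subtraction), and all names and hyperparameters (`SharedLIF`, `leak=0.9`, `threshold=1.0`) are illustrative.

```python
import numpy as np

class SharedLIF:
    """A LIF neuron whose single membrane buffer is reused across layers
    (the cross-layer sharing idea from the abstract). Hyperparameters are
    illustrative assumptions, not the paper's settings."""

    def __init__(self, shape, leak=0.9, threshold=1.0):
        self.leak = leak
        self.threshold = threshold
        # one membrane buffer, regardless of how many layers call it:
        # memory for LIF state is O(1) in network depth
        self.u = np.zeros(shape)

    def __call__(self, current):
        # leaky integration of the incoming current
        self.u = self.leak * self.u + current
        # binary spike where the membrane crosses the threshold
        spikes = (self.u >= self.threshold).astype(current.dtype)
        # soft reset: subtract the threshold where a spike was emitted
        self.u -= spikes * self.threshold
        return spikes

# The same neuron object serves two consecutive "layers", so the
# post-firing membrane state carries over from layer 1 into layer 2.
lif = SharedLIF(shape=(4,))
out1 = lif(np.array([0.5, 1.2, 0.0, 0.3]))  # layer-1 pre-activation
out2 = lif(np.array([0.6, 0.0, 1.5, 0.3]))  # layer-2 reuses the buffer
```

A per-layer LIF implementation would instead allocate one `u` buffer per layer (and store them all for backpropagation through time), which is the memory cost the paper targets.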
Pages: 15