Unlocking the Potential of Spiking Neural Networks: Understanding the What, Why, and Where

Cited by: 5
Authors
Wickramasinghe, Buddhi [1 ]
Chowdhury, Sayeed Shafayet [1 ]
Kosta, Adarsh Kumar [1 ]
Ponghiran, Wachirawit [1 ]
Roy, Kaushik [1 ]
Affiliations
[1] Purdue Univ, Sch Elect & Comp Engn, W Lafayette, IN 47907 USA
Funding
U.S. National Science Foundation;
Keywords
Neurons; Training; Task analysis; Encoding; Membrane potentials; Computational modeling; Recurrent neural networks; Automatic speech recognition; neuromorphic computing; optical flow estimation; spiking neural networks (SNNs); MEMORY;
DOI
10.1109/TCDS.2023.3329747
CLC number
TP18 [Artificial intelligence theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Spiking neural networks (SNNs) are a promising avenue for machine learning with superior energy efficiency compared to traditional artificial neural networks (ANNs). Recent advances in training and input encoding have put SNNs on par with state-of-the-art ANNs in image classification. However, such tasks do not fully utilize the internal dynamics of SNNs. Notably, a spiking neuron's membrane potential acts as an internal memory, merging incoming inputs sequentially. This recurrent dynamic enables the networks to learn temporal correlations, making SNNs suitable for sequential learning. Such problems can also be tackled using ANNs. However, to capture the temporal dependencies, either the inputs have to be lumped over time (e.g., Transformers), or explicit recurrence needs to be introduced [e.g., recurrent neural networks (RNNs) and long short-term memory (LSTM) networks], which incurs considerable complexity. To that end, we explore the capabilities of SNNs in providing lightweight solutions to four sequential tasks involving text, speech, and vision. Our results demonstrate that SNNs, by leveraging their intrinsic memory, can be an efficient alternative to RNNs and LSTMs for sequence processing, especially for certain edge applications. Furthermore, SNNs can be combined with ANNs (hybrid networks) synergistically to obtain the best of both worlds in terms of accuracy and efficiency.
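The "membrane potential as internal memory" dynamic described in the abstract can be illustrated with a minimal leaky integrate-and-fire (LIF) neuron. This is a generic sketch of the standard discrete-time LIF model, not the paper's specific implementation; the decay factor `beta`, the unit `threshold`, and the soft-reset rule are illustrative assumptions.

```python
def lif_forward(inputs, beta=0.9, threshold=1.0):
    """Run a leaky integrate-and-fire neuron over an input sequence.

    The membrane potential `v` is an implicit recurrent state: each
    step it decays by `beta` (the leak), accumulates the new input,
    and is reduced by the threshold whenever a spike is emitted
    (soft reset). No explicit recurrent weights are needed.
    """
    v = 0.0            # membrane potential (the neuron's "memory")
    spikes = []
    for x in inputs:
        v = beta * v + x                       # leaky integration
        s = 1.0 if v >= threshold else 0.0     # fire on crossing threshold
        v -= s * threshold                     # soft reset after a spike
        spikes.append(s)
    return spikes
```

Because `v` carries information across time steps, a network of such neurons can learn temporal correlations without the explicit recurrence of an RNN or LSTM, which is the efficiency argument the abstract makes.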
Pages: 1648 - 1663
Page count: 16
Related references
102 entries in total
  • [11] Bu T., 2023, arXiv, DOI arXiv:2303.04347
  • [12] Burr G. W., Brightsky M. J., Sebastian A., Cheng H.-Y., Wu J.-Y., Kim S., Sosa N. E., Papandreou N., Lung H.-L., Pozidis H., Eleftheriou E., Lam C. H., "Recent Progress in Phase-Change Memory Technology," IEEE Journal on Emerging and Selected Topics in Circuits and Systems, 2016, 6 (02): 146 - 162
  • [13] Cao Y., Chen Y., Khosla D., "Spiking Deep Convolutional Neural Networks for Energy-Efficient Object Recognition," International Journal of Computer Vision, 2015, 113 (01): 54 - 66
  • [14] Chakraborty I., Jaiswal A., Saha A. K., Gupta S. K., Roy K., "Pathways to efficient neuromorphic computing with non-volatile memory technologies," Applied Physics Reviews, 2020, 7 (02)
  • [15] Lee C., 2020, Computer Vision - ECCV 2020, 16th European Conference Proceedings, LNCS 12374, P366, DOI 10.1007/978-3-030-58526-6_22
  • [16] Cho K., 2014, arXiv, DOI arXiv:1409.1259
  • [17] Chowdhury S. S., 2021, arXiv
  • [18] Chowdhury S. S., Rathi N., Roy K., "Towards Ultra Low Latency Spiking Neural Networks for Vision and Sequential Tasks Using Temporal Pruning," Computer Vision, ECCV 2022, Pt XI, 2022, 13671: 709 - 726
  • [19] Chowdhury S. S., Lee C., Roy K., "Towards understanding the effect of leak in Spiking Neural Networks," Neurocomputing, 2021, 464: 83 - 94
  • [20] Chung J., 2014, arXiv, DOI arXiv:1412.3555, 10.48550/arXiv.1412.3555