LC-TTFS: Toward Lossless Network Conversion for Spiking Neural Networks With TTFS Coding

Cited by: 1
Authors
Yang, Qu [1 ]
Zhang, Malu [2 ]
Wu, Jibin [3 ]
Tan, Kay Chen [3 ]
Li, Haizhou [1 ,4 ]
Affiliations
[1] Natl Univ Singapore, Dept Elect & Comp Engn, Singapore 117583, Singapore
[2] Univ Elect Sci & Technol China, Chengdu 611731, Peoples R China
[3] Hong Kong Polytech Univ, Dept Comp, Hong Kong, Peoples R China
[4] Chinese Univ Hong Kong, CUHK Shenzhen, Shenzhen Res Inst Big Data, Sch Data Sci, Shenzhen 518172, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Neurons; Encoding; Firing; Computational modeling; Biological neural networks; Artificial neural networks; Task analysis; Artificial neural network (ANN)-to-spiking neural network (SNN) conversion; deep spiking neural network; image classification; image reconstruction; speech enhancement; time-to-first-spike (TTFS) coding; MODEL;
DOI
10.1109/TCDS.2023.3334010
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Biological neurons communicate with each other using precise spike times in addition to spike firing rates. Time-to-first-spike (TTFS) coding is inspired by this biological observation. However, effective solutions for training TTFS-based spiking neural networks (SNNs) are lacking. In this article, we put forward a simple yet effective network conversion algorithm, referred to as lossless conversion (LC)-TTFS, that addresses the two main problems hindering an effective conversion from a high-performance artificial neural network (ANN) to a TTFS-based SNN. We show that our algorithm achieves a near-perfect mapping between the activation values of an ANN and the spike times of an SNN on a number of challenging AI tasks, including image classification, image reconstruction, and speech enhancement. With TTFS coding, we achieve up to orders of magnitude savings in computation over ANNs and other rate-based SNNs. The study, therefore, paves the way for deploying ultralow-power TTFS-based SNNs on power-constrained edge computing platforms.
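The central idea of TTFS coding described in the abstract, where larger ANN activation values map to earlier spike times, can be illustrated with a minimal sketch. This is a generic illustration under assumptions, not the paper's LC-TTFS algorithm: the function names `ttfs_encode`/`ttfs_decode`, the normalization of activations to [0, 1], and the linear mapping t = t_max * (1 - a) are all hypothetical choices made here for clarity.

```python
import numpy as np

def ttfs_encode(activations, t_max=1.0):
    """Map activations in [0, 1] to first-spike times in [0, t_max].

    Higher activation -> earlier spike; zero activation spikes at t_max
    (or, in many schemes, not at all). Linear mapping is an assumption.
    """
    a = np.clip(np.asarray(activations, dtype=float), 0.0, 1.0)
    return t_max * (1.0 - a)

def ttfs_decode(spike_times, t_max=1.0):
    """Invert the linear TTFS mapping back to activation values."""
    t = np.asarray(spike_times, dtype=float)
    return 1.0 - t / t_max
```

Under this linear scheme the encode/decode pair is an exact inverse, which mirrors the abstract's goal of a near-lossless mapping between ANN activations and SNN spike times; the actual LC-TTFS conversion operates on trained network layers rather than on raw values like this.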
Pages: 1626-1639
Page count: 14