LC-TTFS: Toward Lossless Network Conversion for Spiking Neural Networks With TTFS Coding

Cited by: 1
Authors
Yang, Qu [1]
Zhang, Malu [2]
Wu, Jibin [3]
Tan, Kay Chen [3]
Li, Haizhou [1,4]
Affiliations
[1] Natl Univ Singapore, Dept Elect & Comp Engn, Singapore 117583, Singapore
[2] Univ Elect Sci & Technol China, Chengdu 611731, Peoples R China
[3] Hong Kong Polytech Univ, Dept Comp, Hong Kong, Peoples R China
[4] Chinese Univ Hong Kong, CUHK Shenzhen, Shenzhen Res Inst Big Data, Sch Data Sci, Shenzhen 518172, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Neurons; Encoding; Firing; Computational modeling; Biological neural networks; Artificial neural networks; Task analysis; Artificial neural network (ANN)-to-spiking neural network (SNN) conversion; deep spiking neural network; image classification; image reconstruction; speech enhancement; time-to-first-spike (TTFS) coding; MODEL;
DOI
10.1109/TCDS.2023.3334010
CLC classification number
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104; 0812; 0835; 1405;
Abstract
Biological neurons use precise spike times, in addition to spike firing rates, to communicate with each other; time-to-first-spike (TTFS) coding is inspired by this observation. However, effective solutions for training TTFS-based spiking neural networks (SNNs) are still lacking. In this article, we put forward a simple yet effective network conversion algorithm, referred to as lossless conversion (LC)-TTFS, that addresses two main problems hindering an effective conversion from a high-performance artificial neural network (ANN) to a TTFS-based SNN. We show that our algorithm achieves a near-perfect mapping between the activation values of an ANN and the spike times of an SNN on a number of challenging AI tasks, including image classification, image reconstruction, and speech enhancement. With TTFS coding, we achieve up to orders of magnitude savings in computation over ANNs and other rate-based SNNs. The study, therefore, paves the way for deploying ultralow-power TTFS-based SNNs on power-constrained edge computing platforms.
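The conversion described in the abstract rests on the idea that an ANN activation value can be carried by the timing of a single spike: larger activations fire earlier. A minimal sketch of such a linear encode/decode pair is shown below (an illustrative mapping only, not the actual LC-TTFS algorithm; the function names, the `[0, 1]` activation normalization, and the `t_max` parameter are assumptions for the example):

```python
import numpy as np

def ttfs_encode(activations, t_max=1.0):
    """Map normalized ANN activations in [0, 1] to first-spike times.

    Larger activation -> earlier spike. This linear mapping is a toy
    illustration of TTFS coding, not the paper's conversion rule.
    """
    a = np.clip(np.asarray(activations, dtype=float), 0.0, 1.0)
    return t_max * (1.0 - a)

def ttfs_decode(spike_times, t_max=1.0):
    """Invert the linear mapping to recover the activation values."""
    return 1.0 - np.asarray(spike_times, dtype=float) / t_max
```

Because each neuron emits at most one spike per input, a TTFS network's spike count (and hence its synaptic-operation cost) is bounded by the neuron count, which is the source of the computational savings over rate-coded SNNs that integrate many spikes per neuron.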
Pages: 1626-1639
Page count: 14