Exploring 8-bit Arithmetic for Training Spiking Neural Networks

Cited: 0
Authors
Fernandez-Hart, T. [1 ]
Kalganova, T. [1 ]
Knight, James C. [2 ]
Affiliations
[1] Brunel Univ, CEDPS, Dept Elect & Comp Engn, Uxbridge UB8 3PH, Middx, England
[2] Univ Sussex, Sch Engn & Informat, Brighton BN1 9QJ, E Sussex, England
Source
2024 IEEE INTERNATIONAL CONFERENCE ON OMNI-LAYER INTELLIGENT SYSTEMS, COINS 2024 | 2024
Funding
Engineering and Physical Sciences Research Council (EPSRC), UK;
Keywords
DOI
10.1109/COINS61597.2024.10622154
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Code
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Spiking Neural Networks (SNNs) offer advantages over traditional Artificial Neural Networks (ANNs) in terms of biological plausibility, noise tolerance, and temporal processing capabilities. Additionally, SNNs can achieve significant energy efficiency when deployed on specialized neuromorphic platforms. However, the common practice of training SNNs using Back Propagation Through Time (BPTT) on GPUs before deployment is resource-intensive and hinders scalability. Although reduced precision inference with SNNs has been explored, the use of reduced precision during training remains largely unexamined. This study investigates the potential of posit arithmetic, a novel numerical format, for training SNNs on future posit-enabled accelerators. We evaluate the performance of 8-bit posit and floating-point arithmetic compared to 32-bit floating-point on two datasets. Our results show that 8-bit posits can match the performance of 32-bit floating-point arithmetic when all training components are quantised. These findings suggest that posit arithmetic could be a promising foundation for developing efficient hardware accelerators dedicated to SNN training. Such advancements are essential for reducing resource usage and enhancing energy efficiency, enabling the exploration of larger and more complex SNN architectures, and promoting their wider adoption.
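This record reproduces only the abstract, so no reference implementation is given here. As a rough sketch of what quantising training components to 8-bit posits can look like in software, the listing below enumerates every posit<8,2> bit pattern (an assumed es value; the paper may use a different configuration), builds a lookup table of representable values, and fake-quantises NumPy tensors by rounding to the nearest entry. The names posit_to_float and fake_quantise are illustrative rather than taken from the paper, ties round towards the lower value instead of to even, and a posit-enabled accelerator would of course perform the arithmetic natively rather than round-tripping through float.

import numpy as np

NBITS, ES = 8, 2   # assumed posit<8,2> configuration, not taken from the paper

def posit_to_float(p):
    """Decode an 8-bit posit bit pattern (integer 0..255) to a Python float."""
    if p == 0:
        return 0.0
    if p == 0x80:                       # Not-a-Real (NaR)
        return float("nan")
    sign = -1.0 if p & 0x80 else 1.0
    if p & 0x80:
        p = (-p) & 0xFF                 # two's complement gives the magnitude bits
    bits = [(p >> i) & 1 for i in range(NBITS - 2, -1, -1)]   # drop the sign bit
    first, run = bits[0], 1
    while run < len(bits) and bits[run] == first:
        run += 1                        # regime: run of identical bits
    k = (run - 1) if first == 1 else -run
    rest = bits[run + 1:]               # skip the regime terminator bit
    exp_bits = rest[:ES]
    e = 0
    for b in exp_bits:
        e = (e << 1) | b
    e <<= ES - len(exp_bits)            # truncated exponent bits are zero
    frac = sum(b * 2.0 ** -(i + 1) for i, b in enumerate(rest[ES:]))
    return sign * (1.0 + frac) * 2.0 ** (k * (1 << ES) + e)

# Every finite posit<8,2> value, sorted for round-to-nearest lookup.
POSIT8_VALUES = np.array(sorted(posit_to_float(p) for p in range(256) if p != 0x80))

def fake_quantise(x):
    """Round each element of x to the nearest representable posit<8,2> value.

    Simulates storing weights, activations and gradients as 8-bit posits
    while the arithmetic itself still runs in floating point.
    """
    x = np.asarray(x, dtype=np.float64)
    idx = np.clip(np.searchsorted(POSIT8_VALUES, x), 1, len(POSIT8_VALUES) - 1)
    lo, hi = POSIT8_VALUES[idx - 1], POSIT8_VALUES[idx]
    return np.where(np.abs(x - lo) <= np.abs(hi - x), lo, hi)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.1, size=(4, 4))          # e.g. synaptic weights
    g = rng.normal(scale=1e-3, size=(4, 4))         # e.g. BPTT gradients
    w_q = fake_quantise(w - 0.01 * g)               # quantised SGD-style update
    print(np.max(np.abs(w_q - (w - 0.01 * g))))     # worst-case rounding error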
Pages: 380 - 385
Page count: 6
Related Papers
50 in total
  • [1] Scalable Methods for 8-bit Training of Neural Networks
    Banner, Ron
    Hubara, Itay
    Hoffer, Elad
    Soudry, Daniel
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 31 (NIPS 2018), 2018, 31
  • [2] Training Deep Neural Networks with 8-bit Floating Point Numbers
    Wang, Naigang
    Choi, Jungwook
    Brand, Daniel
    Chen, Chia-Yu
    Gopalakrishnan, Kailash
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 31 (NIPS 2018), 2018, 31
  • [3] Hybrid 8-bit Floating Point (HFP8) Training and Inference for Deep Neural Networks
    Sun, Xiao
    Choi, Jungwook
    Chen, Chia-Yu
    Wang, Naigang
    Venkataramani, Swagath
    Srinivasan, Vijayalakshmi
    Cui, Xiaodong
    Zhang, Wei
    Gopalakrishnan, Kailash
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 32 (NIPS 2019), 2019, 32
  • [4] Training Deep Neural Networks in 8-bit Fixed Point with Dynamic Shared Exponent Management
    Yamaguchi, Hisakatsu
    Ito, Makiko
    Yoda, Katsu
    Ike, Atsushi
    PROCEEDINGS OF THE 2021 DESIGN, AUTOMATION & TEST IN EUROPE CONFERENCE & EXHIBITION (DATE 2021), 2021, : 1536 - 1541
  • [5] ERSFQ 8-Bit Parallel Arithmetic Logic Unit
    Kirichenko, Alex F.
    Vernik, Igor V.
    Kamkar, Michael Y.
    Walter, Jason
    Miller, Maximilian
    Albu, Lucian Remus
    Mukhanov, Oleg A.
    IEEE TRANSACTIONS ON APPLIED SUPERCONDUCTIVITY, 2019, 29 (05)
  • [6] Clipping-Based Post Training 8-Bit Quantization of Convolution Neural Networks for Object Detection
    Chen, Leisheng
    Lou, Peihuang
    APPLIED SCIENCES-BASEL, 2022, 12 (23):
  • [7] Quantization for Deep Neural Network Training with 8-bit Dynamic Fixed Point
    Sakai, Yasufumi
    2020 7TH INTERNATIONAL CONFERENCE ON SOFT COMPUTING & MACHINE INTELLIGENCE (ISCMI 2020), 2020, : 126 - 130
  • [8] Training high-performance and large-scale deep neural networks with full 8-bit integers
    Yang, Yukuan
    Deng, Lei
    Wu, Shuang
    Yan, Tianyi
    Xie, Yuan
    Li, Guoqi
    NEURAL NETWORKS, 2020, 125 : 70 - 82
  • [9] Optimization of MLP Neural Networks in 8-bit Microcontrollers using Program Memory
    Guimaraes, Caio J. B. V.
    Torquato, Matheus E.
    Fernandes, Marcelo A. C.
    2021 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2021,
  • [10] Sub-8-Bit Quantization Aware Training for 8-Bit Neural Network Accelerator with On-Device Speech Recognition
    Zhen, Kai
    Nguyen, Hieu Duy
    Chinta, Raviteja
    Susanj, Nathan
    Mouchtaris, Athanasios
    Afzal, Tariq
    Rastrow, Ariya
    INTERSPEECH 2022, 2022, : 3033 - 3037