A Low Power and Low Latency FPGA-Based Spiking Neural Network Accelerator

Cited by: 8
Authors
Liu, Hanwen [1 ]
Chen, Yi [1 ]
Zeng, Zihang [2 ]
Zhang, Malu [1 ]
Qu, Hong [1 ]
Affiliations
[1] Univ Elect Sci & Technol China, Dept Comp Sci & Engn, Chengdu, Peoples R China
[2] Univ Elect Sci & Technol China, Glasgow Coll, Chengdu, Peoples R China
Source
2023 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, IJCNN | 2023
Funding
U.S. National Science Foundation;
Keywords
Spiking Neural Networks; FPGA; Neuromorphic Accelerator; ON-CHIP; IMPLEMENTATION; PROCESSOR; SYSTEM;
DOI
10.1109/IJCNN54540.2023.10191153
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Spiking Neural Networks (SNNs), known as the third generation of neural networks, are notable for their biological plausibility and brain-like characteristics. Recent efforts further demonstrate the potential of SNNs for high-speed inference by designing accelerators that exploit parallelism in the temporal or spatial dimension. However, given limited hardware resources, these accelerator designs must use off-chip memory to store large amounts of intermediate data, which leads to both high power consumption and long latency. In this paper, we focus on the data flow between layers to improve arithmetic efficiency. Based on the discrete property of spikes, we design a convolution-pooling (CONVP) unit that fuses the processing of the convolutional layer and the pooling layer to reduce latency and resource utilization. Furthermore, for the fully-connected layer, we apply intra-output parallelism and inter-output parallelism to accelerate network inference. We demonstrate the effectiveness of our proposed hardware architecture by implementing different SNN models on different datasets on a Zynq XA7Z020 FPGA. The experiments show that our accelerator achieves about a 28x inference speed-up with competitive power compared with an FPGA implementation on the MNIST dataset, and a 15x inference speed-up with low power compared with an ASIC design on the DVSGesture dataset.
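The fused convolution-pooling idea can be illustrated with a minimal software sketch (an assumption for illustration, not the paper's actual hardware design; the function name `convp_step` and all parameters are hypothetical). Because input spikes are binary, each convolution result reduces to a masked sum of kernel weights, and max-pooling over binary output spikes reduces to checking whether any convolution position inside the pooling window crosses the firing threshold — so the pooled spike can be emitted directly, without storing the intermediate convolution feature map:

```python
import numpy as np

def convp_step(spikes, weights, threshold=1.0, pool=2):
    """Fused convolution + max-pooling over a binary spike map.

    spikes : (H, W) 0/1 input spike map for one timestep
    weights: (kh, kw) convolution kernel
    Returns a pooled binary spike map without materialising the
    full convolution output.
    """
    H, W = spikes.shape
    kh, kw = weights.shape
    oh, ow = H - kh + 1, W - kw + 1        # conv output size (valid mode)
    ph, pw = oh // pool, ow // pool        # pooled output size
    out = np.zeros((ph, pw), dtype=np.uint8)
    for i in range(ph):
        for j in range(pw):
            fired = 0
            # evaluate the pool x pool conv positions of this window
            for di in range(pool):
                for dj in range(pool):
                    r, c = i * pool + di, j * pool + dj
                    # spikes are 0/1, so the dot product is just a
                    # masked sum of the kernel weights
                    v = np.sum(weights[spikes[r:r + kh, c:c + kw] == 1])
                    if v >= threshold:
                        # any firing position makes the pooled
                        # (max-pooled) output spike: emit and stop early
                        fired = 1
                        break
                if fired:
                    break
            out[i, j] = fired
    return out
```

The early exit mirrors the hardware benefit: once one position in the pooling window fires, the remaining conv positions need not be evaluated and no intermediate map is buffered off-chip.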
Pages: 8
Related Papers
50 records in total
  • [31] Implementation of FPGA-based Accelerator for Deep Neural Networks
    Tsai, Tsung-Han
    Ho, Yuan-Chen
    Sheu, Ming-Hwa
    2019 IEEE 22ND INTERNATIONAL SYMPOSIUM ON DESIGN AND DIAGNOSTICS OF ELECTRONIC CIRCUITS & SYSTEMS (DDECS), 2019,
  • [32] sEMG-Based Gesture Recognition with Spiking Neural Networks on Low-Power FPGA
    Scrugli, Matteo Antonio
    Leone, Gianluca
    Busia, Paola
    Meloni, Paolo
    DESIGN AND ARCHITECTURES FOR SIGNAL AND IMAGE PROCESSING, DASIP 2024, 2024, 14622 : 15 - 26
  • [33] 4-Gbps low-latency FPGA-based underwater wireless optical communication
    Zhang, Tianyi
    Fei, Chao
    Wang, Yuan
    Du, Ji
    Xie, Yitong
    Zhang, Fei
    Tian, Jiahan
    Zhang, Guowu
    Wang, Gaoxuan
    Hong, Xiaojian
    He, Sailing
    OPTICS EXPRESS, 2024, 32 (21): : 36207 - 36222
  • [34] Efficient Neuron Architecture for FPGA-based Spiking Neural Networks
    Wan, Lei
    Luo, Yuling
    Song, Shuxiang
    Harkin, Jim
    Liu, Junxiu
    2016 27TH IRISH SIGNALS AND SYSTEMS CONFERENCE (ISSC), 2016,
  • [35] FPGA-Based Pulse Compressor for Ultra Low Latency Visible Light Communications
    Ricci, Stefano
    Caputo, Stefano
    Mucchi, Lorenzo
    ELECTRONICS, 2023, 12 (02)
  • [36] FPGA-based Low-Latency Digital Servo for Optical Physics Experiments
    Pomponio, Marco
    Hati, Archita
    Nelson, Craig
    2020 JOINT CONFERENCE OF THE IEEE INTERNATIONAL FREQUENCY CONTROL SYMPOSIUM AND INTERNATIONAL SYMPOSIUM ON APPLICATIONS OF FERROELECTRICS (IFCS-ISAF), 2020,
  • [37] Evolutionary FPGA-Based Spiking Neural Networks for Continual Learning
    Otero, Andres
    Sanllorente, Guillermo
    de la Torre, Eduardo
    Nunez-Yanez, Jose
    APPLIED RECONFIGURABLE COMPUTING. ARCHITECTURES, TOOLS, AND APPLICATIONS, ARC 2023, 2023, 14251 : 260 - 274
  • [38] FPGA-based Convolutional Neural Network Accelerator design using High Level Synthesize
    Ghaffari, Sina
    Sharifian, Saeed
    2016 2ND INTERNATIONAL CONFERENCE OF SIGNAL PROCESSING AND INTELLIGENT SYSTEMS (ICSPIS), 2016, : 29 - 34
  • [39] A High Utilization FPGA-Based Accelerator for Variable-Scale Convolutional Neural Network
    Li, Xin
    Cai, Yujie
    Han, Jun
    Zeng, Xiaoyang
    2017 IEEE 12TH INTERNATIONAL CONFERENCE ON ASIC (ASICON), 2017, : 944 - 947
  • [40] FPGA-Based High-Performance Data Compression Deep Neural Network Accelerator
    Wang, Hanze
    Fu, Yingxun
    Ma, Li
    2022 INTERNATIONAL CONFERENCE ON BIG DATA, INFORMATION AND COMPUTER NETWORK (BDICN 2022), 2022, : 563 - 569