A Low Power and Low Latency FPGA-Based Spiking Neural Network Accelerator

Cited by: 8
Authors
Liu, Hanwen [1 ]
Chen, Yi [1 ]
Zeng, Zihang [2 ]
Zhang, Malu [1 ]
Qu, Hong [1 ]
Affiliations
[1] Univ Elect Sci & Technol China, Dept Comp Sci & Engn, Chengdu, Peoples R China
[2] Univ Elect Sci & Technol China, Glasgow Coll, Chengdu, Peoples R China
Source
2023 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2023
Funding
National Science Foundation (USA)
关键词
Spiking Neural Networks; FPGA; Neuromorphic Accelerator; ON-CHIP; IMPLEMENTATION; PROCESSOR; SYSTEM;
DOI
10.1109/IJCNN54540.2023.10191153
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Spiking Neural Networks (SNNs), known as the third generation of neural networks, are noted for their biological plausibility and brain-like characteristics. Recent efforts further demonstrate the potential of SNNs for high-speed inference by designing accelerators that exploit parallelism in the temporal or spatial dimensions. However, because on-chip hardware resources are limited, these accelerator designs must use off-chip memory to store large amounts of intermediate data, which leads to both high power consumption and long latency. In this paper, we focus on the data flow between layers to improve arithmetic efficiency. Exploiting the discrete nature of spikes, we design a convolution-pooling (CONVP) unit that fuses the processing of the convolutional layer and the pooling layer to reduce latency and resource utilization. Furthermore, for the fully-connected layer, we apply intra-output and inter-output parallelism to accelerate network inference. We demonstrate the effectiveness of the proposed hardware architecture by implementing different SNN models on different datasets on a Zynq XA7Z020 FPGA. The experiments show that our accelerator achieves about a 28x inference speedup with competitive power compared with a previous FPGA implementation on the MNIST dataset, and about a 15x inference speedup with lower power compared with an ASIC design on the DVSGesture dataset.
Pages: 8
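To make the CONVP idea from the abstract concrete, the following is a minimal behavioral sketch in Python/NumPy, not the authors' RTL: assuming binary spikes and integrate-and-fire (IF) neurons, the convolution reduces to conditional accumulation of weights wherever a spike is present, and 2x2 max-pooling of the resulting binary spikes reduces to a logical OR, so the pooled spike map can be produced without buffering the full convolutional output off-chip. The function name conv_pool_step, the reset-to-zero behavior, and the threshold V_THRESH are illustrative assumptions, not details taken from the paper.

import numpy as np

V_THRESH = 1.0  # assumed integrate-and-fire threshold (illustrative)

def conv_pool_step(spikes_in, weights, v_mem):
    # spikes_in: (C_in, H, W) binary spike map for one timestep
    # weights:   (C_out, C_in, K, K) convolution kernels
    # v_mem:     (C_out, H-K+1, W-K+1) persistent membrane potentials
    # returns:   (C_out, (H-K+1)//2, (W-K+1)//2) pooled binary spikes
    c_out, c_in, k, _ = weights.shape
    h_out = spikes_in.shape[1] - k + 1
    w_out = spikes_in.shape[2] - k + 1

    # Binary inputs: multiply-accumulate degenerates into conditional
    # accumulation of the weights at positions where a spike is present.
    for oc in range(c_out):
        for y in range(h_out):
            for x in range(w_out):
                patch = spikes_in[:, y:y + k, x:x + k].astype(bool)
                v_mem[oc, y, x] += weights[oc][patch].sum()

    # Integrate-and-fire: emit a spike and reset where the threshold is crossed.
    spikes = v_mem >= V_THRESH
    v_mem[spikes] = 0.0

    # 2x2 max-pooling of binary spikes is a logical OR over each window, so the
    # pooled output can be streamed out without storing the full conv spike map.
    h2, w2 = h_out // 2, w_out // 2
    s = spikes[:, :2 * h2, :2 * w2]
    return (s[:, 0::2, 0::2] | s[:, 1::2, 0::2]
            | s[:, 0::2, 1::2] | s[:, 1::2, 1::2]).astype(np.uint8)

In a layer-fused pipeline, the pooled spikes would presumably stream directly into the next layer's processing unit, which is the kind of intermediate-data traffic to off-chip memory that the abstract aims to avoid.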