A Low Power and Low Latency FPGA-Based Spiking Neural Network Accelerator

Cited by: 8
|
Authors
Liu, Hanwen [1 ]
Chen, Yi [1 ]
Zeng, Zihang [2 ]
Zhang, Malu [1 ]
Qu, Hong [1 ]
Affiliations
[1] Univ Elect Sci & Technol China, Dept Comp Sci & Engn, Chengdu, Peoples R China
[2] Univ Elect Sci & Technol China, Glasgow Coll, Chengdu, Peoples R China
Source
2023 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, IJCNN | 2023
Funding
U.S. National Science Foundation;
Keywords
Spiking Neural Networks; FPGA; Neuromorphic Accelerator; ON-CHIP; IMPLEMENTATION; PROCESSOR; SYSTEM;
DOI
10.1109/IJCNN54540.2023.10191153
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104; 0812; 0835; 1405;
Abstract
Spiking Neural Networks (SNNs), known as the third generation of neural networks, are noted for their biological plausibility and brain-like characteristics. Recent efforts further demonstrate the potential of SNNs for high-speed inference through accelerators that exploit parallelism along the temporal or spatial dimension. However, because hardware resources are limited, these accelerator designs must use off-chip memory to store large amounts of intermediate data, which leads to both high power consumption and long latency. In this paper, we focus on the data flow between layers to improve arithmetic efficiency. Exploiting the discrete nature of spikes, we design a convolution-pooling (CONVP) unit that fuses the processing of the convolutional layer and the pooling layer to reduce latency and resource utilization. Furthermore, for the fully-connected layer, we apply intra-output and inter-output parallelism to accelerate network inference. We demonstrate the effectiveness of the proposed hardware architecture by implementing different SNN models on different datasets on a Zynq XA7Z020 FPGA. The experiments show that our accelerator achieves about a 28x inference speedup at competitive power compared with an FPGA implementation on the MNIST dataset, and about a 15x inference speedup at low power compared with an ASIC design on the DVSGesture dataset.
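To make the fused convolution-pooling idea concrete, below is a minimal NumPy sketch of one timestep of a fused convolution-plus-pooling pass over binary spike maps. It assumes a simple threshold neuron and OR-style pooling over the binary convolution outputs; the function and parameter names (fused_conv_pool, threshold, pool) are illustrative and are not taken from the paper's hardware design.

```python
import numpy as np

def fused_conv_pool(spikes, weights, threshold=1.0, pool=2):
    """Behavioral sketch of a fused convolution + pooling pass for one timestep
    of binary spike input (illustrative only; not the paper's RTL design).

    spikes  : (C_in, H, W)        binary 0/1 spike map
    weights : (C_out, C_in, K, K) synaptic weights
    Returns : (C_out, H_out//pool, W_out//pool) binary spike map
    """
    c_out, c_in, k, _ = weights.shape
    h_out = spikes.shape[1] - k + 1
    w_out = spikes.shape[2] - k + 1
    out = np.zeros((c_out, h_out // pool, w_out // pool), dtype=np.uint8)

    for oc in range(c_out):
        for y in range(0, h_out - h_out % pool, pool):
            for x in range(0, w_out - w_out % pool, pool):
                fired = 0
                # Evaluate the pool x pool convolution window on the fly: because
                # the inputs are 0/1 spikes, each conv result is just a sum of the
                # weights selected by the spike mask (no multiplications needed),
                # and the full conv feature map never has to be stored off-chip.
                for dy in range(pool):
                    for dx in range(pool):
                        patch = spikes[:, y + dy:y + dy + k, x + dx:x + dx + k]
                        membrane = np.sum(weights[oc] * patch)
                        # Pooling over binary conv outputs reduces to a logical OR:
                        # the pooled neuron fires if any window crosses threshold.
                        fired |= int(membrane >= threshold)
                out[oc, y // pool, x // pool] = fired
    return out

# Tiny usage example with random spikes and weights
spk = (np.random.rand(1, 6, 6) > 0.7).astype(np.uint8)
w = np.random.randn(2, 1, 3, 3).astype(np.float32)
print(fused_conv_pool(spk, w, threshold=1.0).shape)  # (2, 2, 2)
```

Fusing the two layers this way means only the small pooled spike map is passed to the next layer, which is the data-flow property the paper exploits to cut off-chip traffic, latency, and power.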
Pages: 8
Related Papers
50 in total
  • [41] The Shunt: An FPGA-Based Accelerator for Network Intrusion Prevention
    Weaver, Nicholas
    Paxson, Vern
    Gonzalez, Jose M.
    FPGA 2007: FIFTEENTH ACM/SIGDA INTERNATIONAL SYMPOSIUM ON FIELD-PROGRAMMABLE GATE ARRAYS, 2007, : 199 - 206
  • [42] A low-latency LSTM accelerator using balanced sparsity based on FPGA
    Jiang, Jingfei
    Xiao, Tao
    Xu, Jinwei
    Wen, Dong
    Gao, Lei
    Dou, Yong
    MICROPROCESSORS AND MICROSYSTEMS, 2022, 89
  • [43] Low power FPGA-based image processing core for wireless capsule endoscopy
    Turcza, Pawel
    Duplaga, Mariusz
    SENSORS AND ACTUATORS A-PHYSICAL, 2011, 172 (02) : 552 - 560
  • [44] FPGA-based spiking neural network with hippocampal oscillation dynamics towards biologically meaningful prostheses
    Yang, Shuangming
    Wang, Jiang
    Deng, Bin
    Wei, Xile
    Li, Huiyan
    Wang, Tianxin
    2018 13TH WORLD CONGRESS ON INTELLIGENT CONTROL AND AUTOMATION (WCICA), 2018, : 490 - 494
  • [45] Optimizing a FPGA-based Neural Accelerator for Small IoT Devices
    Hong, Seongmin
    Lee, Inho
    Park, Yongjun
    2018 INTERNATIONAL CONFERENCE ON ELECTRONICS, INFORMATION, AND COMMUNICATION (ICEIC), 2018, : 176 - 177
  • [46] Composite FPGA-based Accelerator for Deep Convolutional Neural Networks
    Zhang, Huan
    Yang, Yuan
    Xiao, Yang
    2019 IEEE INTERNATIONAL CONFERENCE ON ELECTRON DEVICES AND SOLID-STATE CIRCUITS (EDSSC), 2019,
  • [47] An FPGA-based Accelerator Implementation for Deep Convolutional Neural Networks
    Zhou, Yongmei
    Jiang, Jingfei
    PROCEEDINGS OF 2015 4TH INTERNATIONAL CONFERENCE ON COMPUTER SCIENCE AND NETWORK TECHNOLOGY (ICCSNT 2015), 2015, : 829 - 832
  • [48] FPGA-based Low-Batch Training Accelerator for Modern CNNs Featuring High Bandwidth Memory
    Venkataramanaiah, Shreyas K.
    Suh, Han-Sok
    Yin, Shihui
    Nurvitadhi, Eriko
    Dasu, Aravind
    Cao, Yu
    Seo, Jae-Sun
    2020 IEEE/ACM INTERNATIONAL CONFERENCE ON COMPUTER AIDED-DESIGN (ICCAD), 2020,
  • [49] FPGA-Based Vehicle Detection and Tracking Accelerator
    Zhai, Jiaqi
    Li, Bin
    Lv, Shunsen
    Zhou, Qinglei
    SENSORS, 2023, 23 (04)
  • [50] FPGA-based Acceleration of Neural Network Training
    Sang, Ruoyu
    Liu, Qiang
    Zhang, Qijun
    2016 IEEE MTT-S INTERNATIONAL CONFERENCE ON NUMERICAL ELECTROMAGNETIC AND MULTIPHYSICS MODELING AND OPTIMIZATION (NEMO), 2016,