A Resource-Efficient Scalable Spiking Neural Network Hardware Architecture With Reusable Modules and Memory Reutilization

Cited by: 2
Authors
Wang, Ran [1 ,2 ]
Zhang, Jian [3 ,4 ]
Wang, Tengbo [5 ]
Liu, Jia [6 ]
Zhang, Guohe [1 ,2 ]
Affiliations
[1] Xi An Jiao Tong Univ, Sch Microelect, Xian 710049, Shaanxi, Peoples R China
[2] Xi An Jiao Tong Univ, Key Lab Micronano Elect & Syst Integrat Xian City, Xian 710049, Shaanxi, Peoples R China
[3] Peking Univ, Sch Integrated Circuits, Beijing 100871, Peoples R China
[4] Beijing Microelect Technol Inst, Beijing 100076, Peoples R China
[5] Xi An Jiao Tong Univ, Sch Elect & Informat Engn, Xian 710049, Shaanxi, Peoples R China
[6] China Elect Technol Grp Corp, Inst 24, Chongqing 400060, Peoples R China
Keywords
Convolution; Neurons; Computer architecture; Biological neural networks; Membrane potentials; Kernel; Hardware; Neuromorphic computing; spiking convolutional neural networks (SCNNs); image classification; energy efficiency; field programmable gate array (FPGA); PROCESSOR;
DOI
10.1109/TCSII.2023.3301180
Chinese Library Classification (CLC)
TM [Electrical Technology]; TN [Electronic Technology, Communication Technology]
Subject Classification Codes
0808; 0809
Abstract
In this brief, a cost-efficient and scalable spiking convolutional neural network (SCNN) architecture is proposed. Reusable modules are designed to reduce hardware resource consumption by exploiting the structural similarity of the convolutional and pooling layers in an SCNN. Taking advantage of time-driven neural processing, memory reutilization is applied to reduce the memory required for storing neural states. Benefiting from these reusable modules, the proposed architecture demonstrates outstanding scalability. Two SCNN structures (named SCNN I and SCNN II) with different complexities and scales are designed to handle different classification tasks. The experiments are conducted on a Xilinx Kintex-7 FPGA operating at a clock frequency of 100 MHz. SCNN I and SCNN II achieve accuracies of 98.15% and 85.71% on the MNIST and CIFAR-10 datasets, respectively, with energy consumption per classification of 0.015 mJ and 0.724 mJ. These results highlight the suitability of the proposed architecture for deployment in resource-constrained real-time edge processing scenarios.
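As a rough illustration of the time-driven processing and membrane-memory reuse described in the abstract, the following Python sketch processes a spiking convolution layer one time step at a time while reusing a single membrane-potential buffer across all time steps. This is a minimal sketch under stated assumptions: the function name, layer sizes, threshold value, and reset-to-zero scheme are illustrative choices, not the paper's exact hardware design.

```python
import numpy as np

def if_conv_layer(spikes_in, weights, v_mem, v_th=1.0):
    """One time step of a spiking (integrate-and-fire) convolution layer.

    spikes_in : binary input spike map for this time step, shape (C_in, H, W)
    weights   : convolution kernels, shape (C_out, C_in, k, k)
    v_mem     : membrane-potential buffer, shape (C_out, H_out, W_out),
                updated in place so the same memory is reused every time step
    Returns the binary output spike map for this time step.
    """
    c_out, c_in, k, _ = weights.shape
    h_out = spikes_in.shape[1] - k + 1
    w_out = spikes_in.shape[2] - k + 1
    # Accumulate synaptic input: only positions with input spikes contribute.
    for co in range(c_out):
        for y in range(h_out):
            for x in range(w_out):
                patch = spikes_in[:, y:y + k, x:x + k]
                v_mem[co, y, x] += np.sum(weights[co] * patch)
    # Fire-and-reset: neurons whose potential crosses the threshold spike.
    spikes_out = (v_mem >= v_th).astype(np.uint8)
    v_mem[spikes_out == 1] = 0.0  # reset fired neurons; buffer is reused next step
    return spikes_out

# Example: drive the same layer module for T time steps (assumed values).
rng = np.random.default_rng(0)
T = 8                                       # number of time steps (assumption)
x_spikes = rng.random((T, 1, 8, 8)) < 0.2   # Poisson-like binary input spike train
w = rng.standard_normal((4, 1, 3, 3)) * 0.5
v = np.zeros((4, 6, 6))                     # one reusable membrane buffer
for t in range(T):
    out_t = if_conv_layer(x_spikes[t].astype(np.uint8), w, v)
```

In the same spirit, a hardware implementation can reuse one convolution/pooling processing module across layers and overwrite the neural-state memory from one time step to the next, which is the kind of resource sharing the brief attributes its cost efficiency to.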
Pages: 430-434
Number of pages: 5