FPGA-based implementation of deep neural network using stochastic computing

Cited by: 16
Authors
Nobari, Maedeh [1 ]
Jahanirad, Hadi [1 ]
Affiliations
[1] Univ Kurdistan, Dept Elect & Commun Engn, Kurdistan, Iran
Keywords
Artificial neural network (ANN); Multi-layer perceptron (MLP); Stochastic computing; Probability estimator (PE); Field programmable gate array (FPGA); COMPUTATION; ARCHITECTURE; PERCEPTRON
DOI
10.1016/j.asoc.2023.110166
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
A serious challenge in real-time artificial intelligence applications is the hardware implementation of deep neural networks (DNNs). Among various methods, stochastic computing (SC)-based implementations have received tremendous attention due to their low hardware overhead. However, the slow convergence rate is a major problem in SC-based neural network implementations, and millions of clock cycles may be required to generate a reasonably accurate output. The reconfigurability and parallel nature of field-programmable gate array (FPGA) chips make them a preferable platform for SC-based DNN implementation, but a fully or semi-parallel implementation of DNNs requires extensive hardware resources. In this paper, an efficient method for DNN implementation on an FPGA chip is presented to address these problems. The FPGA's reconfigurability allows DNNs with different numbers of neurons and different topologies to be implemented on a single chip. Convergence time is significantly reduced by limiting the length of the stochastic bitstreams and establishing precisely timed synchronization between the processing units. Furthermore, because the FPGA chip has a limited number of input/output pins, a sequential architecture is proposed in which the DNN inputs enter through only three 8-bit ports; this makes it possible to implement DNNs for image-processing applications. The proposed method is implemented in the Verilog hardware description language on the Xilinx Virtex-7 xc7v2000t FPGA. The results show a more than 82% reduction in hardware resources and the lowest power consumption compared to state-of-the-art methods. In addition, the average error rate of the implemented DNN is reduced by 2%. (c) 2023 Elsevier B.V. All rights reserved.
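The abstract's central trade-off, that SC output accuracy improves only as the stochastic bitstreams get longer, can be illustrated with a generic software model (not the paper's method): in unipolar stochastic computing, a value in [0, 1] is encoded as the probability of a 1 in a random bitstream, a single AND gate multiplies two independent streams, and the decoded result converges to the true product as the stream length grows. All function names below are illustrative.

```python
import random

def to_bitstream(value, length, rng):
    """Unipolar SC encoding: each bit is 1 with probability `value` (value in [0, 1])."""
    return [1 if rng.random() < value else 0 for _ in range(length)]

def sc_multiply(bs_a, bs_b):
    """A bitwise AND of two independent unipolar streams encodes the product of their values."""
    return [a & b for a, b in zip(bs_a, bs_b)]

def estimate(bitstream):
    """Decode: the encoded value is the fraction of 1s in the stream."""
    return sum(bitstream) / len(bitstream)

rng = random.Random(0)
for n in (64, 1024, 16384):
    a = to_bitstream(0.5, n, rng)
    b = to_bitstream(0.25, n, rng)
    # Longer streams give estimates closer to the true product 0.5 * 0.25 = 0.125,
    # which is why truncating bitstream length (as the paper does) trades accuracy for latency.
    print(n, estimate(sc_multiply(a, b)))
```

The hardware appeal is visible here: multiplication costs one AND gate per stream instead of a full multiplier, at the price of O(1/sqrt(N)) estimation error in the stream length N.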
Pages: 17