FPGA based convolution and memory architecture for Convolutional Neural Network

Cited: 2
Authors
Shahan, K. A. [1]
Rani, Sheeba J. [1]
Affiliations
[1] Indian Inst Space Sci & Technol, Dept Avion, Thiruvananthapuram, Kerala, India
Source
2020 33RD INTERNATIONAL CONFERENCE ON VLSI DESIGN AND 2020 19TH INTERNATIONAL CONFERENCE ON EMBEDDED SYSTEMS (VLSID) | 2020
Keywords
convolution; neural network; Winograd; efficient hardware; architecture; deep convolutional neural network; memory reuse; FPGA
DOI
10.1109/VLSID49098.2020.00049
CLC Classification
TP [Automation technology; computer technology]
Subject Classification Code
0812
Abstract
Convolutional Neural Networks (CNNs) are widely used in vision-based applications to improve performance, but at the cost of higher storage and computation. Hardware implementations of CNNs are limited by computational complexity and by the bandwidth of off-chip memory access. In this work, a novel FPGA-based hardware architecture is proposed to accelerate CNNs: 2D convolution is performed with reduced computational complexity using Winograd's 2D minimal filtering algorithm, and a memory architecture reduces on-chip read operations when accessing adjacent input data tiles. An on-chip memory bank reuse scheme is also employed to reduce the number of read and write operations to off-chip memory. The proposed convolution architecture achieves lower computational complexity by reducing the number of multiplications without a proportionate increase in the number of additions compared to prior implementations. The number of data read operations from on-chip memory is reduced by a factor of 4, and the memory bank reuse scheme reduces the latency of accessing intermediate data. The implementation uses a 16-bit fixed-point representation, which reduces bit width to save area and energy. A Virtex UltraScale+ VCU118 Evaluation Board 2.0 populated with an XCVU9P-L2FLGA2104 device is used as the implementation platform, and a VGG Net-based CNN is used for the implementation. The computation time of each convolutional layer is also estimated and found to be reduced. For a 3x3 kernel, the number of multiplications per output is reduced from 9 to 4 compared to standard convolution, and the number of additions is reduced from 14 to 12 compared to prior hardware implementations of Winograd's 2D minimal filtering algorithm.
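The multiplication reduction claimed in the abstract (4 instead of 9 per output for a 3x3 kernel) follows from the standard Winograd F(2x2, 3x3) minimal filtering scheme, which produces a 2x2 output tile from a 4x4 input tile using 16 elementwise multiplications instead of 36. The sketch below uses the commonly published transform matrices for F(2x2, 3x3); it illustrates the general algorithm only, not the specific fixed-point hardware mapping of this paper.

```python
import numpy as np

# Standard Winograd F(2x2, 3x3) transform matrices.
B_T = np.array([[1,  0, -1,  0],
                [0,  1,  1,  0],
                [0, -1,  1,  0],
                [0,  1,  0, -1]], dtype=float)
G = np.array([[1.0,  0.0, 0.0],
              [0.5,  0.5, 0.5],
              [0.5, -0.5, 0.5],
              [0.0,  0.0, 1.0]], dtype=float)
A_T = np.array([[1, 1,  1,  0],
                [0, 1, -1, -1]], dtype=float)

def winograd_f2x2_3x3(d, g):
    """Compute a 2x2 output tile from a 4x4 input tile d and a 3x3
    kernel g using 16 elementwise multiplications (4 per output)."""
    U = G @ g @ G.T        # transformed kernel, 4x4 (precomputable)
    V = B_T @ d @ B_T.T    # transformed input tile, 4x4
    M = U * V              # elementwise product: the only multiplications
    return A_T @ M @ A_T.T # inverse transform -> 2x2 output tile

# Check against direct convolution (CNN-style correlation, no kernel flip).
rng = np.random.default_rng(0)
d = rng.standard_normal((4, 4))
g = rng.standard_normal((3, 3))
direct = np.array([[np.sum(d[i:i+3, j:j+3] * g) for j in range(2)]
                   for i in range(2)])
assert np.allclose(winograd_f2x2_3x3(d, g), direct)
```

Note that adjacent 4x4 input tiles (stride 2) overlap by two columns or rows, which is why the paper's memory architecture targets reuse of adjacent tiles to cut on-chip read operations.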
Pages: 183-188
Page count: 6
Related Papers
50 records total
[41]   VHDL Generator for A High Performance Convolutional Neural Network FPGA-Based Accelerator [J].
Hamdan, Muhammad K. ;
Rover, Diane T. .
2017 INTERNATIONAL CONFERENCE ON RECONFIGURABLE COMPUTING AND FPGAS (RECONFIG), 2017,
[42]   Using Data Compression for Optimizing FPGA-Based Convolutional Neural Network Accelerators [J].
Guan, Yijin ;
Xu, Ningyi ;
Zhang, Chen ;
Yuan, Zhihang ;
Cong, Jason .
ADVANCED PARALLEL PROCESSING TECHNOLOGIES, 2017, 10561 :14-26
[43]   FPGA-Based Implementation of a Real-Time Object Recognition System Using Convolutional Neural Network [J].
Gilan, Ali Azarmi ;
Emad, Mohammad ;
Alizadeh, Bijan .
IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS II-EXPRESS BRIEFS, 2020, 67 (04) :755-759
[44]   All Binarized Convolutional Neural Network and Its Implementation on an FPGA [J].
Shimoda, Masayuki ;
Sato, Shimpei ;
Nakahara, Hiroki .
2017 INTERNATIONAL CONFERENCE ON FIELD PROGRAMMABLE TECHNOLOGY (ICFPT), 2017, :291-294
[45]   Analysis & Design of Convolution Operator for High Speed and High Accuracy Convolutional Neural Network-Based Inference Engines [J].
Deepika, S. ;
Arunachalam, V. .
IEEE TRANSACTIONS ON COMPUTERS, 2022, 71 (02) :390-396
[46]   A convolutional neural network accelerator on FPGA for crystallography spot screening [J].
Jiang, Yuwei ;
Feng, Yingqi ;
Ren, Tao ;
Zhu, Yongxin .
PROCEEDINGS OF THE 2024 IEEE 10TH IEEE INTERNATIONAL CONFERENCE ON HIGH PERFORMANCE AND SMART COMPUTING, HPSC 2024, 2024, :66-70
[47]   An Automatic Instrument Recognition Approach Based on Deep Convolutional Neural Network [J].
Ke, Jiangyan ;
Lin, Rongchuan ;
Sharma, Ashutosh .
RECENT ADVANCES IN ELECTRICAL & ELECTRONIC ENGINEERING, 2021, 14 (06) :660-670
[48]   Unified Accelerator for Attention and Convolution in Inference Based on FPGA [J].
Li, Tianyang ;
Zhang, Fan ;
Fan, Xitian ;
Shen, Jianliang ;
Guo, Wei ;
Cao, Wei .
2023 IEEE INTERNATIONAL SYMPOSIUM ON CIRCUITS AND SYSTEMS, ISCAS, 2023,
[49]   FPGA Realization of a Neural Network based Motor Controller [J].
Diodati, Francesco ;
Jeppesen, Ben ;
Jervis, Mark ;
Saletti, Roberto .
2022 IEEE 27TH INTERNATIONAL CONFERENCE ON EMERGING TECHNOLOGIES AND FACTORY AUTOMATION (ETFA), 2022,
[50]   Digital Recognition Based on Neural Network and FPGA Implementation [J].
Zhang, Chaoyue ;
Wang, Yu ;
Guo, Jinxu ;
Zhang, Hao .
2017 NINTH INTERNATIONAL CONFERENCE ON INTELLIGENT HUMAN-MACHINE SYSTEMS AND CYBERNETICS (IHMSC 2017), VOL 1, 2017, :280-283