An Efficient FPGA-based Depthwise Separable Convolutional Neural Network Accelerator with Hardware Pruning

Cited by: 4
Authors
Liu, Zhengyan [1 ]
Liu, Qiang [1 ]
Yan, Shun [1 ]
Cheung, Ray C. C. [2 ]
Affiliations
[1] Tianjin Univ, Sch Microelect, 92nd Rd, Tianjin 300072, Nankai, Peoples R China
[2] City Univ Hong Kong, Dept Elect Engn, Hong Kong, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
CNN accelerator; depthwise-separable convolution; bottleneck; model compression;
DOI
10.1145/3615661
Chinese Library Classification (CLC) Code
TP3 [Computing Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
Convolutional neural networks (CNNs) have been widely deployed in computer vision tasks. However, their computation- and resource-intensive nature hinders their application on embedded systems. This article proposes an efficient inference accelerator on a Field Programmable Gate Array (FPGA) for CNNs with depthwise separable convolutions. To improve accelerator efficiency, we make four contributions: (1) an efficient convolution engine with multiple strategies for exploiting parallelism and a configurable adder tree are designed to support three types of convolution operations; (2) a dedicated architecture combined with input buffers is designed for the bottleneck network structure to reduce data transmission time; (3) a hardware padding scheme that eliminates invalid padding operations is proposed; and (4) a hardware-assisted pruning method is developed to support an online tradeoff between model accuracy and power consumption. Experimental results show that for MobileNetV2 the accelerator achieves 10x and 6x energy-efficiency improvements over CPU and GPU implementations, respectively, and delivers 302.3 frames per second and 181.8 GOPS, the best performance among several existing single-engine FPGA accelerators. The proposed hardware-assisted pruning method reduces power consumption by 59.7% with an accuracy loss within 5%.
Pages: 20
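
The abstract describes an accelerator for depthwise separable convolutions with a pruning knob that trades accuracy for power. Below is a minimal NumPy sketch, not taken from the paper, showing how a depthwise separable convolution decomposes into a per-channel (depthwise) 3x3 stage followed by a pointwise (1x1) stage; the prune_threshold parameter is only an assumed software analogue of the accuracy/power tradeoff exposed by the hardware-assisted pruning, and the function name and arguments are hypothetical.

# Minimal NumPy sketch (not the paper's design) of a depthwise separable
# convolution with an optional magnitude-threshold weight mask. The
# prune_threshold knob is an assumed software analogue of the
# accuracy/power tradeoff; all names here are hypothetical.
import numpy as np


def depthwise_separable_conv(x, dw_kernels, pw_kernels, prune_threshold=0.0):
    """x: (H, W, C_in) feature map; dw_kernels: (3, 3, C_in), one 3x3
    filter per input channel; pw_kernels: (C_in, C_out) pointwise (1x1)
    filters; weights with |w| < prune_threshold are treated as zero."""
    if prune_threshold > 0.0:
        dw_kernels = np.where(np.abs(dw_kernels) < prune_threshold, 0.0, dw_kernels)
        pw_kernels = np.where(np.abs(pw_kernels) < prune_threshold, 0.0, pw_kernels)

    H, W, C_in = x.shape
    xp = np.pad(x, ((1, 1), (1, 1), (0, 0)))  # 'same' padding, stride 1

    # Depthwise stage: each channel is filtered by its own 3x3 kernel.
    dw_out = np.zeros((H, W, C_in))
    for i in range(H):
        for j in range(W):
            patch = xp[i:i + 3, j:j + 3, :]                 # (3, 3, C_in)
            dw_out[i, j, :] = np.sum(patch * dw_kernels, axis=(0, 1))

    # Pointwise stage: a 1x1 convolution mixes channels.
    return dw_out @ pw_kernels                              # (H, W, C_out)


# Usage: 8x8 feature map, 16 input channels, 32 output channels.
x = np.random.randn(8, 8, 16)
y = depthwise_separable_conv(x, np.random.randn(3, 3, 16),
                             np.random.randn(16, 32), prune_threshold=0.5)
print(y.shape)  # (8, 8, 32)

The two-stage decomposition is what motivates the paper's single convolution engine with a configurable adder tree: depthwise, pointwise, and standard convolutions differ mainly in how partial sums are accumulated across channels.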