Improving the Performance of OpenCL-based FPGA Accelerator for Convolutional Neural Network

Cited by: 142
Authors
Zhang, Jialiang [1 ]
Li, Jing [1 ]
Affiliation
[1] University of Wisconsin-Madison, Department of Electrical & Computer Engineering, Madison, WI 53706, USA
Source
FPGA'17: Proceedings of the 2017 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays | 2017
DOI
10.1145/3020078.3021698
Chinese Library Classification (CLC)
TP3 [Computing Technology, Computer Technology]
Discipline Code
0812
Abstract
OpenCL FPGA has recently gained great popularity with emerging needs for workload acceleration such as the Convolutional Neural Network (CNN), which is the most popular deep learning architecture in the domain of computer vision. While OpenCL enhances the code portability and programmability of FPGA, it comes at the expense of performance. The key challenge is to optimize the OpenCL kernels to efficiently utilize the flexible hardware resources in FPGA. Simply optimizing the OpenCL kernel code through various compiler options turns out to be insufficient to achieve desirable performance for both compute-intensive and data-intensive workloads such as convolutional neural networks. In this paper, we first propose an analytical performance model and apply it to perform an in-depth analysis on the resource requirement of CNN classifier kernels and available resources on modern FPGAs. We identify that the key performance bottleneck is the on-chip memory bandwidth. We propose a new kernel design to effectively address such bandwidth limitation and to provide an optimal balance among computation, on-chip memory access, and off-chip memory access. As a case study, we further apply these techniques to design a CNN accelerator based on the VGG model. Finally, we evaluate the performance of our CNN accelerator using an Altera Arria 10 GX1150 board. We achieve 866 Gop/s floating-point performance at a 370 MHz working frequency and 1.79 Top/s 16-bit fixed-point performance at 385 MHz. To the best of our knowledge, our implementation achieves the best power efficiency and performance density compared to existing work.
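For context, the throughput figures above can be sanity-checked with a generic roofline-style calculation. The Python sketch below is only an illustration of that style of analysis, not the analytical performance model proposed in the paper; the peak-throughput, bandwidth, and arithmetic-intensity values in the usage example are hypothetical placeholders.

```python
# Generic roofline-style estimate of attainable throughput.
# NOT the authors' model; parameter values in the example are hypothetical.

def attainable_gops(peak_gops, bandwidth_gbs, ops_per_byte):
    """Attainable throughput (Gop/s) is capped either by raw compute
    or by memory bandwidth times arithmetic intensity (op/byte)."""
    return min(peak_gops, bandwidth_gbs * ops_per_byte)

# Hypothetical example: 1.5 Top/s peak compute, 200 GB/s on-chip bandwidth,
# and a kernel with an arithmetic intensity of 8 op/byte.
print(f"attainable: {attainable_gops(1500, 200, 8):.0f} Gop/s")

# Sanity check of the reported results: operations completed per clock cycle
# (866 Gop/s at 370 MHz float; 1.79 Top/s at 385 MHz 16-bit fixed point).
for gops, mhz, label in [(866, 370, "float"), (1790, 385, "16-bit fixed")]:
    ops_per_cycle = gops * 1e9 / (mhz * 1e6)
    print(f"{label}: ~{ops_per_cycle:.0f} ops/cycle at {mhz} MHz")
```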
Pages: 25-34
Page count: 10