Laius: An 8-bit Fixed-point CNN Hardware Inference Engine

Times Cited: 34
Authors
Li, Zhisheng [1 ]
Wang, Lei [1 ]
Guo, Shasha [1 ]
Deng, Yu [1 ]
Dou, Qiang [1 ]
Zhou, Haifang [1 ]
Lu, Wenyuan [2 ]
Affiliations
[1] Natl Univ Def Technol, Sch Comp Sci, Changsha, Hunan, Peoples R China
[2] Xian Satellite Monitoring & Control Ctr, Xian, Shaanxi, Peoples R China
Source
2017 15TH IEEE INTERNATIONAL SYMPOSIUM ON PARALLEL AND DISTRIBUTED PROCESSING WITH APPLICATIONS AND 2017 16TH IEEE INTERNATIONAL CONFERENCE ON UBIQUITOUS COMPUTING AND COMMUNICATIONS (ISPA/IUCC 2017) | 2017
Keywords
CNN accelerator; FPGA; LeNet; Inference; Implementation;
DOI
10.1109/ISPA/IUCC.2017.00030
CLC Number
TP3 [Computing Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
Convolutional Neural Networks (CNNs) are among the most effective neural network models for many classification tasks, such as voice recognition, computer vision, and biological information processing. Unfortunately, CNN computation is both memory-intensive and compute-intensive, which poses a significant challenge to the design of hardware accelerators. A large number of hardware accelerators for CNN inference have been designed by industry and academia. Most of these engines are based on 32-bit floating-point matrix multiplication, where the data precision is over-provisioned for the inference task and the hardware cost is too high. In this paper, an 8-bit fixed-point LeNet inference engine (Laius) is designed and implemented on an FPGA. To reduce FPGA resource consumption, we propose a methodology for finding the optimal bit length for weights and biases in LeNet, which results in using 8-bit fixed point for most of the computation and 16-bit fixed point for the rest. A PE (Processing Element) design is proposed, and pipelining and PE tiling techniques are used to improve the performance of the inference engine. Through theoretical analysis, we conclude that the DSP resource is the most critical FPGA resource and should be used carefully during the design process. We implement the inference engine on a Xilinx 485t FPGA. Experimental results show that the designed LeNet inference engine achieves 44.9 Gops throughput with 8-bit fixed-point operations after pipelining. Moreover, with only a 1% loss of accuracy, the 8-bit fixed-point engine reduces latency by 31.43%, LUT consumption by 87.01%, BRAM consumption by 66.50%, DSP consumption by 65.11%, and power by 47.95% compared with a 32-bit fixed-point inference engine with the same structure.
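The fixed-point scheme summarized in the abstract can be illustrated with a small sketch. The snippet below is not the authors' implementation; it is a minimal Python/NumPy illustration of quantizing floating-point weights and activations to 8-bit fixed point and accumulating their products in a wider integer, mirroring the paper's mixed 8-bit/16-bit arithmetic. The Q-format split (6 fractional bits), the vector size, and the helper name quantize_fixed_point are assumptions chosen for the example, not values from the paper.

import numpy as np

def quantize_fixed_point(x, total_bits=8, frac_bits=6):
    # Map floats to signed fixed point with `frac_bits` fractional bits,
    # rounding to nearest and saturating at the representable range.
    # (frac_bits=6 is an assumed Q-format split, not taken from the paper.)
    scale = 1 << frac_bits
    qmin = -(1 << (total_bits - 1))
    qmax = (1 << (total_bits - 1)) - 1
    q = np.clip(np.round(x * scale), qmin, qmax).astype(np.int32)
    return q, scale

# Toy weight/activation vectors standing in for one convolution dot product.
rng = np.random.default_rng(0)
weights = rng.uniform(-1.0, 1.0, size=64).astype(np.float32)
activations = rng.uniform(0.0, 1.0, size=64).astype(np.float32)

w_q, w_scale = quantize_fixed_point(weights)
a_q, a_scale = quantize_fixed_point(activations)

# Each 8-bit x 8-bit product fits in 16 bits; the running sum is kept in a
# wider accumulator, analogous to using wider fixed point for the remaining
# computation as described in the abstract.
acc = int(np.sum(w_q.astype(np.int64) * a_q.astype(np.int64)))
fixed_result = acc / (w_scale * a_scale)
float_result = float(np.dot(weights, activations))
print(f"fixed-point result: {fixed_result:.4f}, float result: {float_result:.4f}")

Comparing fixed_result with float_result shows the small quantization error that such a scheme trades for the reported savings in LUT, BRAM, DSP, and power.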
Pages: 143-150
Page Count: 8