FINN: A Framework for Fast, Scalable Binarized Neural Network Inference

Cited by: 655
Authors
Umuroglu, Yaman [1 ,2 ]
Fraser, Nicholas J. [1 ,3 ]
Gambardella, Giulio [1 ]
Blott, Michaela [1 ]
Leong, Philip [3 ]
Jahre, Magnus [2 ]
Vissers, Kees [1 ]
Affiliations
[1] Xilinx Res Labs, San Jose, CA 95124 USA
[2] Norwegian Univ Sci & Technol, Trondheim, Norway
[3] Univ Sydney, Sydney, NSW, Australia
Source
FPGA'17: PROCEEDINGS OF THE 2017 ACM/SIGDA INTERNATIONAL SYMPOSIUM ON FIELD-PROGRAMMABLE GATE ARRAYS | 2017
DOI: 10.1145/3020078.3021744
Chinese Library Classification: TP3 [Computing technology; computer technology]
Discipline code: 0812
Abstract
Research has shown that convolutional neural networks contain significant redundancy, and high classification accuracy can be obtained even when weights and activations are reduced from floating point to binary values. In this paper, we present FINN, a framework for building fast and flexible FPGA accelerators using a flexible heterogeneous streaming architecture. By utilizing a novel set of optimizations that enable efficient mapping of binarized neural networks to hardware, we implement fully connected, convolutional and pooling layers, with per-layer compute resources being tailored to user-provided throughput requirements. On a ZC706 embedded FPGA platform drawing less than 25 W total system power, we demonstrate up to 12.3 million image classifications per second with 0.31 μs latency on the MNIST dataset with 95.8% accuracy, and 21906 image classifications per second with 283 μs latency on the CIFAR-10 and SVHN datasets with 80.1% and 94.9% accuracy, respectively. To the best of our knowledge, ours are the fastest classification rates reported to date on these benchmarks.
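The efficiency the abstract describes comes from the standard binarized-network arithmetic trick: when weights and activations are constrained to {-1, +1} and packed as bits, a dot product collapses to an XNOR followed by a popcount. A minimal illustrative sketch (not code from the FINN framework itself; the function name and bit-packing convention are assumptions for illustration):

```python
def binary_dot(w_bits: int, a_bits: int, n: int) -> int:
    """Dot product of two n-element {-1, +1} vectors, each packed into an
    integer bitmask with -1 encoded as 0 and +1 encoded as 1.

    If k positions agree in sign, the signed dot product is
    k - (n - k) = 2k - n, and k is the popcount of XNOR(w, a).
    """
    mask = (1 << n) - 1
    xnor = ~(w_bits ^ a_bits) & mask  # bit is 1 exactly where signs agree
    agreements = bin(xnor).count("1")  # popcount
    return 2 * agreements - n

# Two 4-bit masks agreeing in exactly 2 of 4 positions: dot product 2 - 2 = 0
print(binary_dot(0b1011, 0b1101, 4))  # -> 0
print(binary_dot(0b1111, 0b1111, 4))  # all agree -> +4
print(binary_dot(0b0000, 0b1111, 4))  # all disagree -> -4
```

On an FPGA this is why binarization pays off: each multiply-accumulate becomes a single-bit XNOR plus a population count, which maps to LUTs far more cheaply than floating-point DSP multipliers.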
Pages: 65 - 74 (10 pages)