Scaling Neural Network Performance through Customized Hardware Architectures on Reconfigurable Logic

Cited by: 4
Authors
Blott, Michaela [1 ]
Preusser, Thomas B. [1 ]
Fraser, Nicholas [1 ]
Gambardella, Giulio [1 ]
O'Brien, Kenneth [1 ]
Umuroglu, Yaman [2 ]
Leeser, Miriam [3 ]
Affiliations
[1] Xilinx Res Labs, Dublin, Ireland
[2] Norwegian Univ Sci & Technol, Trondheim, Norway
[3] Northeastern Univ, Boston, MA 02115 USA
Source
2017 IEEE 35TH INTERNATIONAL CONFERENCE ON COMPUTER DESIGN (ICCD) | 2017
DOI
10.1109/ICCD.2017.73
Chinese Library Classification: TP3 [Computing technology, computer technology]
Discipline code: 0812
Abstract
Convolutional Neural Networks have improved dramatically in recent years, surpassing human accuracy on certain problems and outperforming traditional computer vision algorithms. While the compute pattern itself is relatively simple, significant compute and memory challenges remain, as CNNs may contain millions of floating-point parameters and require billions of floating-point operations to process a single image. These computational requirements, combined with storage footprints that exceed typical cache sizes, pose a significant performance and power challenge for modern compute architectures. One promising opportunity to scale performance and power efficiency is leveraging reduced-precision representations for all activations and weights, as this makes it possible to scale compute capabilities and to reduce weight and feature-map buffering requirements as well as energy consumption. While a small reduction in accuracy is incurred, these Quantized Neural Networks have been shown to achieve state-of-the-art accuracy on standard benchmark datasets such as MNIST, CIFAR-10, SVHN, and even ImageNet, and thus provide highly attractive design trade-offs. Current research has focused mainly on implementing extreme variants with full binarization of weights and/or activations, typically on smaller input images. In this paper, we investigate the scalability of dataflow architectures with respect to supporting various precisions for both weights and activations, larger image dimensions, and increasing numbers of feature-map channels. Key contributions are a formalized approach to understanding the scalability of the existing hardware architecture with cost models, and a performance prediction as a function of the target device size. We provide validating experimental results for ImageNet classification on a server-class platform, namely the AWS F1 node.
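To make the reduced-precision idea in the abstract concrete, the sketch below shows uniform fixed-point quantization of weights in [-1, 1), plus the extreme 1-bit (binarized) case the abstract mentions. This is a minimal illustrative sketch, not the paper's actual quantization scheme; the function names and the choice of a symmetric grid are assumptions for illustration only.

```python
def quantize(w, bits):
    """Snap a weight in [-1, 1) onto a uniform signed fixed-point grid with `bits` bits.
    Illustrative only; the paper's precise quantization scheme may differ."""
    levels = 2 ** (bits - 1)                      # grid points per unit, e.g. 8 for 4 bits
    q = round(w * levels) / levels                # round to the nearest grid point
    return max(-1.0, min(q, 1.0 - 1.0 / levels))  # clamp to the representable range

def binarize(w):
    """The extreme 1-bit variant: keep only the sign of the weight."""
    return 1.0 if w >= 0 else -1.0

weights = [0.73, -0.12, 0.05, -0.88]
print([quantize(w, 4) for w in weights])  # [0.75, -0.125, 0.0, -0.875]
print([binarize(w) for w in weights])     # [1.0, -1.0, 1.0, -1.0]
```

Lowering `bits` coarsens the grid but shrinks each stored weight, which is exactly the trade-off between accuracy and buffering/compute cost that the abstract describes.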
Pages: 419-422
Page count: 4