Maximizing CNN Accelerator Efficiency Through Resource Partitioning

Cited by: 234
Authors
Shen, Yongming [1]
Ferdman, Michael [1]
Milder, Peter [1]
Affiliations
[1] SUNY Stony Brook, Stony Brook, NY 11794 USA
Source
44TH ANNUAL INTERNATIONAL SYMPOSIUM ON COMPUTER ARCHITECTURE (ISCA 2017) | 2017
Funding
U.S. National Science Foundation
Keywords
Convolutional Neural Network; FPGA; Accelerator; Coprocessor
DOI
10.1145/3079856.3080221
CLC Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Convolutional neural networks (CNNs) are revolutionizing machine learning, but they present significant computational challenges. Recently, many FPGA-based accelerators have been proposed to improve the performance and efficiency of CNNs. Current approaches construct a single processor that computes the CNN layers one at a time; the processor is optimized to maximize the throughput at which the collection of layers is computed. However, this approach leads to inefficient designs because the same processor structure is used to compute CNN layers of radically varying dimensions. We present a new CNN accelerator paradigm and an accompanying automated design methodology that partitions the available FPGA resources into multiple processors, each of which is tailored for a different subset of the CNN convolutional layers. Using the same FPGA resources as a single large processor, multiple smaller specialized processors increase computational efficiency and lead to a higher overall throughput. Our design methodology achieves 3.8x higher throughput than the state-of-the-art approach when evaluating the popular AlexNet CNN on a Xilinx Virtex-7 FPGA. For the more recent SqueezeNet and GoogLeNet, the speedups are 2.2x and 2.0x, respectively.
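As a rough illustration of the abstract's argument, the Python sketch below compares one monolithic convolution processor against layer-specialized processors built from the same multiplier budget, using a simple tiling-utilization cycle model. The three layer shapes, the MAC counts, the cycle formula, and the brute-force budget search are all illustrative assumptions; none of them are taken from the paper's actual design methodology.

import math
from itertools import product

# (output channels M, input channels N, MACs per image); values loosely
# resemble three AlexNet-style conv layers but are illustrative only.
LAYERS = [(96, 3, 105e6), (256, 96, 448e6), (384, 256, 150e6)]
BUDGET = 2048   # total multipliers on the hypothetical FPGA
STEP = 32       # granularity at which the budget can be divided
SLOTS = BUDGET // STEP

def layer_cycles(m, n, macs, tm, tn):
    # Cycles to compute one layer on a Tm x Tn multiplier array. Edge
    # tiles leave part of the array idle; that idleness is the
    # inefficiency a one-size-fits-all processor suffers on layers
    # whose dimensions do not match its shape.
    return macs * math.ceil(m / tm) * math.ceil(n / tn) / (m * n)

def best_cycles_by_budget(layers):
    # best[b] = fewest cycles to run `layers` back to back on the best
    # single array using at most b * STEP multipliers.
    best = [math.inf] * (SLOTS + 1)
    max_m = max(m for m, _, _ in layers)
    max_n = max(n for _, n, _ in layers)
    for tm, tn in product(range(1, max_m + 1), range(1, max_n + 1)):
        if tm * tn > BUDGET:
            continue
        c = sum(layer_cycles(m, n, macs, tm, tn) for m, n, macs in layers)
        b = math.ceil(tm * tn / STEP)
        best[b] = min(best[b], c)
    for b in range(1, SLOTS + 1):   # a bigger budget is never worse
        best[b] = min(best[b], best[b - 1])
    return best

# Baseline: one monolithic processor, whole budget, layers run in turn.
single = best_cycles_by_budget(LAYERS)[SLOTS]

# Partitioned: one processor per layer, budget split between them,
# images pipelined, so the slowest processor sets the throughput.
per_layer = [best_cycles_by_budget([layer]) for layer in LAYERS]
partitioned = math.inf
for b1, b2 in product(range(1, SLOTS), repeat=2):
    b3 = SLOTS - b1 - b2
    if b3 >= 1:
        stage = max(per_layer[0][b1], per_layer[1][b2], per_layer[2][b3])
        partitioned = min(partitioned, stage)

print(f"single processor: {single / 1e6:.2f}M cycles/image")
print(f"partitioned     : {partitioned / 1e6:.2f}M cycles/image")
print(f"throughput gain : {single / partitioned:.2f}x")

Under this toy model the pipelined, partitioned design wins (roughly 1.8x here) because each small array can match its layers' channel dimensions exactly, while the single large array must be shaped as a compromise across all layers; this is the utilization argument the abstract makes.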
Pages: 535-547
Page count: 13