Cnvlutin: Ineffectual-Neuron-Free Deep Neural Network Computing

Cited by: 570
Authors
Albericio, Jorge [1 ]
Judd, Patrick [1 ]
Hetherington, Tayler [2 ]
Aamodt, Tor [2 ]
Jerger, Natalie Enright [1 ]
Moshovos, Andreas [1 ]
Affiliations
[1] Univ Toronto, Toronto, ON M5S 1A1, Canada
[2] Univ British Columbia, Vancouver, BC V5Z 1M9, Canada
Source
2016 ACM/IEEE 43RD ANNUAL INTERNATIONAL SYMPOSIUM ON COMPUTER ARCHITECTURE (ISCA) | 2016
Keywords
MATRIX VECTOR MULTIPLICATION;
DOI
10.1109/ISCA.2016.11
CLC Number
TP3 [Computing Technology, Computer Technology]
Subject Classification Code
0812
Abstract
This work observes that a large fraction of the computations performed by Deep Neural Networks (DNNs) are intrinsically ineffectual as they involve a multiplication where one of the inputs is zero. This observation motivates Cnvlutin (CNV), a value-based approach to hardware acceleration that eliminates most of these ineffectual operations, improving performance and energy over a state-of-the-art accelerator with no accuracy loss. CNV uses hierarchical data-parallel units, allowing groups of lanes to proceed mostly independently, enabling them to skip over the ineffectual computations. A co-designed data storage format encodes the computation elimination decisions, taking them off the critical path while avoiding control divergence in the data-parallel units. Combined, the units and the data storage format result in a data-parallel architecture that maintains wide, aligned accesses to its memory hierarchy and that keeps its data lanes busy. By loosening the ineffectual computation identification criterion, CNV enables further performance and energy efficiency improvements, and more so if a loss in accuracy is acceptable. Experimental measurements over a set of state-of-the-art DNNs for image classification show that CNV improves performance over a state-of-the-art accelerator from 1.24x to 1.55x and by 1.37x on average without any loss in accuracy by removing zero-valued operand multiplications alone. While CNV incurs an area overhead of 4.49%, it improves overall EDP (Energy Delay Product) and ED²P (Energy Delay Squared Product) on average by 1.47x and 2.01x, respectively. The average performance improvements increase to 1.52x without any loss in accuracy with a broader ineffectual identification policy. Further improvements are demonstrated with a loss in accuracy.
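The core idea of the abstract — skipping multiplications whose activation operand is zero by storing only non-zero values together with their positions — can be illustrated with a minimal software sketch. This is only an analogy to the hardware scheme: the function names (`encode_nonzero`, `skip_dot`) and the plain (value, offset) pair encoding are illustrative assumptions, not the paper's actual storage format or unit design.

```python
def encode_nonzero(activations):
    # Keep only non-zero activations, each paired with its offset.
    # Loosely mirrors the idea of a co-designed "zero-free" storage
    # format that encodes the skip decisions ahead of time.
    return [(v, i) for i, v in enumerate(activations) if v != 0]

def skip_dot(encoded, weights):
    # Dot product over the encoded stream: zero-operand
    # multiplications never occur, because zeros were never stored.
    return sum(v * weights[i] for v, i in encoded)

activations = [0, 3, 0, 0, 5, 0, 2, 0]
weights = [1, 2, 3, 4, 5, 6, 7, 8]

enc = encode_nonzero(activations)
dense = sum(a * w for a, w in zip(activations, weights))
assert skip_dot(enc, weights) == dense  # same result, 3 multiplies instead of 8
```

In hardware the offsets are used to fetch the matching weights so lanes stay busy on effectual work only; in this sketch that corresponds to indexing `weights[i]` directly rather than scanning the dense vector.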
Pages: 1-13