Exploring the Granularity of Sparsity in Convolutional Neural Networks

Cited by: 303
Authors
Mao, Huizi [1 ]
Han, Song [1 ]
Pool, Jeff [2 ]
Li, Wenshuo [3 ]
Liu, Xingyu [1 ]
Wang, Yu [3 ]
Dally, William J. [1 ,2 ]
Affiliations
[1] Stanford Univ, Stanford, CA 94305 USA
[2] NVIDIA, Santa Clara, CA USA
[3] Tsinghua Univ, Beijing, Peoples R China
Source
2017 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS (CVPRW) | 2017
Keywords
DOI
10.1109/CVPRW.2017.241
Chinese Library Classification
TP18 [Theory of Artificial Intelligence]
Discipline Codes
081104 ; 0812 ; 0835 ; 1405
Abstract
Sparsity helps reduce the computational complexity of DNNs by skipping multiplications with zeros. The granularity of sparsity affects both the efficiency of hardware architectures and the prediction accuracy. In this paper we quantitatively measure the accuracy-sparsity relationship at different granularities. Coarse-grained sparsity yields a more regular sparsity pattern, making hardware acceleration easier, and our experimental results show that coarse-grained sparsity has only a small impact on the achievable sparsity ratio when no accuracy loss is allowed. Moreover, owing to the index-saving effect, coarse-grained sparsity can obtain similar or even better compression rates than fine-grained sparsity at the same accuracy threshold. Our analysis, based on the framework of a recent sparse convolutional neural network (SCNN) accelerator, further demonstrates that it saves 30%-35% of memory references compared with fine-grained sparsity.
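For context, the short NumPy sketch below (not the authors' code) contrasts fine-grained, element-wise magnitude pruning with coarse-grained, kernel-wise pruning on a convolutional weight tensor, and roughly estimates the index-saving effect on storage. The tensor shape, sparsity target, bit widths, and block size are illustrative assumptions, not values from the paper.

# A minimal sketch contrasting fine-grained (element-wise) and coarse-grained
# (kernel-wise) magnitude pruning, with a rough per-nonzero index overhead.
# All concrete numbers here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 32, 3, 3))   # conv weights: (out_ch, in_ch, kH, kW)
sparsity = 0.6                            # fraction of weights to prune

# Fine-grained: drop individual weights with the smallest magnitudes.
thresh = np.quantile(np.abs(W), sparsity)
fine_mask = np.abs(W) > thresh

# Coarse-grained: drop whole 3x3 kernels ranked by mean magnitude, which
# leaves a regular pattern that is easier to exploit in hardware.
kernel_scores = np.abs(W).mean(axis=(2, 3))            # (out_ch, in_ch)
keep_kernels = kernel_scores > np.quantile(kernel_scores, sparsity)
coarse_mask = np.broadcast_to(keep_kernels[:, :, None, None], W.shape)

def storage_bits(mask, weight_bits=16, index_bits=4, block=1):
    # Nonzero weights plus one relative index per retained block: a larger
    # block amortizes the index cost (the "index saving" effect).
    nnz = int(mask.sum())
    return nnz * weight_bits + (nnz // block) * index_bits

print("fine-grained  :", storage_bits(fine_mask, block=1) / 8 / 1024, "KiB")
print("coarse-grained:", storage_bits(coarse_mask, block=9) / 8 / 1024, "KiB")

At equal sparsity both masks keep the same number of weights, but the coarse-grained variant stores fewer indices, which is the effect the abstract refers to.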
Pages: 1927-1934
Number of pages: 8