PERMDNN: Efficient Compressed DNN Architecture with Permuted Diagonal Matrices

Cited by: 75
Authors
Deng, Chunhua [1 ,4 ]
Liao, Siyu [1 ,4 ]
Xie, Yi [1 ,4 ]
Parhi, Keshab K. [2 ]
Qian, Xuehai [3 ]
Yuan, Bo [1 ,4 ]
Affiliations
[1] CUNY, New York, NY 10021 USA
[2] Univ Minnesota, St Paul, MN USA
[3] Univ Southern Calif, Los Angeles, CA USA
[4] Rutgers State Univ, Piscataway, NJ 08855 USA
Source
2018 51ST ANNUAL IEEE/ACM INTERNATIONAL SYMPOSIUM ON MICROARCHITECTURE (MICRO) | 2018
Funding
U.S. National Science Foundation;
Keywords
Deep Learning; Model Compression; VLSI;
DOI
10.1109/MICRO.2018.00024
Chinese Library Classification
TP3 [Computing technology; computer technology];
Discipline code
0812 ;
Abstract
Deep neural networks (DNNs) have emerged as the most important and popular artificial intelligence (AI) technique. The growth of model size poses a key energy-efficiency challenge for the underlying computing platform, so model compression becomes a crucial problem. However, current approaches are limited by various drawbacks. Specifically, the network sparsification approach suffers from irregularity, heuristic compression effects, and large indexing overhead. On the other hand, the recent structured matrix-based approach (i.e., CIRCNN) is limited by its relatively complex arithmetic computation (i.e., FFT), less flexible compression ratio, and inability to fully utilize input sparsity. To address these drawbacks, this paper proposes PERMDNN, a novel approach to generate and execute hardware-friendly structured sparse DNN models using permuted diagonal matrices. Compared with the unstructured sparsification approach, PERMDNN eliminates the drawbacks of indexing overhead, heuristic compression effects, and time-consuming retraining. Compared with the circulant structure-imposing approach, PERMDNN enjoys higher reduction in computational complexity, flexible compression ratio, simple arithmetic computation, and full utilization of input sparsity. We propose the PERMDNN architecture, a multi-processing-element (PE) computing engine targeting fully connected (FC) layers. The entire architecture is highly scalable and flexible, and hence can support the needs of different applications with different model configurations. We implement a 32-PE design in 28nm CMOS technology. Compared with EIE, PERMDNN achieves 3.3x~4.8x higher throughput, 5.9x~8.5x better area efficiency, and 2.8x~4.0x better energy efficiency on different workloads. Compared with CIRCNN, PERMDNN achieves 11.51x higher throughput and 3.89x better energy efficiency.
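The permuted diagonal structure described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: it assumes each p x p block of the weight matrix has exactly one nonzero per row, at column (i + k) mod p for a per-block offset k, so a block is stored as p values plus one index (a compression ratio of p), and its matrix-vector product reduces to an element-wise multiply with a cyclic gather, with no per-weight indexing.

```python
import numpy as np

def permuted_diagonal_block(diag_vals, offset):
    """Materialize a dense p x p permuted diagonal block (for reference only).

    Row i holds a single nonzero, diag_vals[i], at column (i + offset) % p,
    i.e., a diagonal matrix whose columns are cyclically permuted.
    """
    p = len(diag_vals)
    block = np.zeros((p, p))
    for i in range(p):
        block[i, (i + offset) % p] = diag_vals[i]
    return block

def permdiag_matvec(diag_vals, offset, x):
    """Compressed matvec: y[i] = diag_vals[i] * x[(i + offset) % p].

    Only p multiplications and a cyclic index shift are needed,
    instead of p*p multiply-accumulates for a dense block.
    """
    p = len(diag_vals)
    idx = (np.arange(p) + offset) % p
    return diag_vals * x[idx]

# The compressed form stores p values + 1 offset instead of p*p values.
p = 4
d = np.array([1.0, 2.0, 3.0, 4.0])
x = np.array([0.5, 1.5, 2.5, 3.5])
dense = permuted_diagonal_block(d, offset=1)
assert np.allclose(dense @ x, permdiag_matvec(d, 1, x))
```

A full FC layer would tile its weight matrix with many such blocks, each carrying its own offset; the regular structure is what removes the per-weight index storage that unstructured sparsification requires.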
Pages: 189-202
Page count: 14