Compact and Computationally Efficient Representation of Deep Neural Networks

Cited by: 53
Authors
Wiedemann, Simon [1 ]
Mueller, Klaus-Robert [2 ,3 ,4 ]
Samek, Wojciech [1 ]
Affiliations
[1] Fraunhofer Heinrich Hertz Inst, D-10587 Berlin, Germany
[2] Tech Univ Berlin, D-10587 Berlin, Germany
[3] Max Planck Inst Informat, D-66123 Saarbrucken, Germany
[4] Korea Univ, Dept Brain & Cognit Engn, Seoul 136713, South Korea
Keywords
Computationally efficient deep learning; data structures; lossless coding; neural network compression; sparse matrices;
DOI
10.1109/TNNLS.2019.2910073
CLC Number
TP18 [Theory of Artificial Intelligence];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
At the core of any inference procedure in deep neural networks are dot product operations, which are the component that requires the highest computational resources. For instance, deep neural networks such as VGG-16 require up to 15 giga-operations to perform the dot products of a single forward pass, which results in significant energy consumption and thus limits their use in resource-constrained environments, e.g., on embedded devices or smartphones. One common approach to reducing the complexity of inference is to prune and quantize the weight matrices of the neural network. Usually, this results in matrices whose entropy values are low, as measured relative to the empirical probability mass distribution of their elements. In order to exploit such matrices efficiently, one usually relies on, inter alia, sparse matrix representations. However, most of these common matrix storage formats make strong statistical assumptions about the distribution of the elements and therefore cannot efficiently represent the entire set of matrices that exhibit low-entropy statistics (and thus the entire set of compressed neural network weight matrices). In this paper, we address this issue and present new efficient representations for matrices with low-entropy statistics. Like sparse matrix data structures, these formats exploit the statistical properties of the data in order to reduce size and execution complexity. Moreover, we show that the proposed data structures can not only be regarded as a generalization of sparse formats but are also more energy and time efficient under practically relevant assumptions. Finally, we test the storage requirements and execution performance of the proposed formats on compressed neural networks and compare them to dense and sparse representations. We experimentally show that we are able to attain up to 42x compression ratios, 5x speedups, and 90x energy savings when we losslessly convert state-of-the-art networks, such as AlexNet, VGG-16, ResNet152, and DenseNet, into the new data structures and benchmark their respective dot product operations.
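As a rough illustration of the idea described in the abstract, the following plain-NumPy sketch shows how a dot product with a pruned and quantized (low-entropy) weight matrix can replace one multiplication per nonzero entry with one multiplication per distinct value per row. This is only a minimal sketch of the general value-sharing principle, not the paper's actual data structures or implementation; the function name rowwise_shared_value_dot and the toy weights are assumptions made for this example.

import numpy as np

def rowwise_shared_value_dot(W, x):
    """Compute W @ x for a weight matrix W with few distinct values.

    For each row, inputs belonging to the same weight value are summed
    first, and each group sum is multiplied by its shared value once.
    Zeros are skipped entirely, so classical sparsity is a special case.
    """
    y = np.zeros(W.shape[0], dtype=x.dtype)
    for i, row in enumerate(W):
        acc = 0.0
        for v in np.unique(row):
            if v == 0.0:                     # pruned entries contribute nothing
                continue
            acc += v * x[row == v].sum()     # one multiply per distinct value
        y[i] = acc
    return y

# Toy example: a pruned and quantized (low-entropy) weight matrix.
rng = np.random.default_rng(0)
W = rng.choice([0.0, 0.0, 0.0, -0.5, 0.5], size=(4, 8))   # few distinct values
x = rng.standard_normal(8)
assert np.allclose(rowwise_shared_value_dot(W, x), W @ x)

Formats that store each distinct value once together with the column positions that share it can, in this sense, be viewed as a generalization of classical sparse formats such as CSR, where zero is the only value treated specially.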
Pages: 772-785
Number of pages: 14