Toward Compact ConvNets via Structure-Sparsity Regularized Filter Pruning

Cited by: 112
Authors
Lin, Shaohui [1 ]
Ji, Rongrong [1 ,2 ]
Li, Yuchao [1 ]
Deng, Cheng [3 ]
Li, Xuelong [4 ,5 ]
Affiliations
[1] Xiamen Univ, Sch Informat Sci & Engn, Fujian Key Lab Sensing & Comp Smart City, Xiamen 361005, Peoples R China
[2] Peng Cheng Lab, Shenzhen 518055, Peoples R China
[3] Xidian Univ, Sch Elect Engn, Xian 710071, Peoples R China
[4] Northwestern Polytech Univ, Sch Comp Sci, Xian 710072, Peoples R China
[5] Northwestern Polytech Univ, Ctr OPT IMagery Anal & Learning OPTIMAL, Xian 710072, Peoples R China
Funding
China Postdoctoral Science Foundation; National Key Research and Development Program of China;
Keywords
Convolutional neural networks (CNNs); CNN acceleration; CNN compression; structured sparsity; NEURAL-NETWORKS;
DOI
10.1109/TNNLS.2019.2906563
CLC classification number
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The success of convolutional neural networks (CNNs) in computer vision applications has been accompanied by a significant increase in computation and memory costs, which prohibits their usage in resource-limited environments, such as mobile systems or embedded devices. Consequently, research on CNN compression has recently attracted growing attention. In this paper, we propose a novel filter pruning scheme, termed structured sparsity regularization (SSR), to simultaneously speed up the computation and reduce the memory overhead of CNNs, which can be well supported by various off-the-shelf deep learning libraries. Concretely, the proposed scheme incorporates two different regularizers of structured sparsity into the original objective function of filter pruning, which fully coordinates the global output and local pruning operations to adaptively prune filters. We further propose an alternative updating with Lagrange multipliers (AULM) scheme to efficiently solve its optimization. AULM follows the principle of the alternating direction method of multipliers (ADMM) and alternates between promoting the structured sparsity of CNNs and optimizing the recognition loss, which leads to a very efficient solver (2.5x faster than the most recent work that directly solves the group sparsity-based regularization). Moreover, by imposing the structured sparsity, online inference is extremely memory-light, since the number of filters and the output feature maps are simultaneously reduced. The proposed scheme has been deployed to a variety of state-of-the-art CNN structures, including LeNet, AlexNet, VGGNet, ResNet, and GoogLeNet, over different data sets. Quantitative results demonstrate that the proposed scheme achieves superior performance over the state-of-the-art methods. We further demonstrate the proposed compression scheme for the task of transfer learning, including domain adaptation and object detection, where it also shows notable performance gains over the state-of-the-art filter pruning methods.
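To illustrate the underlying idea, the following is a minimal sketch (not the authors' SSR/AULM implementation) of group-sparsity-driven filter pruning, assuming PyTorch: a group-Lasso penalty is computed over whole convolutional filters and added to the recognition loss during training, after which filters with the smallest group norms are removed to produce a thinner layer. Names such as lambda_group and prune_ratio are illustrative hyperparameters, not taken from the paper.

    # Sketch only: group-Lasso regularization over conv filters plus a pruning step.
    import torch
    import torch.nn as nn

    def group_sparsity_penalty(conv: nn.Conv2d) -> torch.Tensor:
        # Treat each output filter (one row of the flattened weight) as a group and
        # sum the L2 norms of the groups; this drives entire filters toward zero.
        w = conv.weight.view(conv.out_channels, -1)
        return w.norm(p=2, dim=1).sum()

    def prune_filters(conv: nn.Conv2d, prune_ratio: float = 0.5) -> nn.Conv2d:
        # Remove the filters with the smallest group norms, shrinking both the
        # layer's parameters and its output feature maps.
        norms = conv.weight.view(conv.out_channels, -1).norm(p=2, dim=1)
        n_keep = max(1, int(conv.out_channels * (1.0 - prune_ratio)))
        keep = torch.argsort(norms, descending=True)[:n_keep]
        pruned = nn.Conv2d(conv.in_channels, n_keep, conv.kernel_size,
                           stride=conv.stride, padding=conv.padding,
                           bias=conv.bias is not None)
        pruned.weight.data = conv.weight.data[keep].clone()
        if conv.bias is not None:
            pruned.bias.data = conv.bias.data[keep].clone()
        return pruned

    if __name__ == "__main__":
        conv = nn.Conv2d(16, 32, kernel_size=3, padding=1)
        lambda_group = 1e-3  # hypothetical regularization weight
        penalty = lambda_group * group_sparsity_penalty(conv)  # added to the recognition loss
        print("group-sparsity penalty:", penalty.item())
        thin = prune_filters(conv, prune_ratio=0.5)
        print("filters:", conv.out_channels, "->", thin.out_channels)

The sketch only captures the structured-sparsity intuition; the paper's SSR scheme couples two structured-sparsity regularizers with the global objective and solves the resulting problem with the ADMM-style AULM updates rather than a simple post hoc threshold.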
Pages: 574-588
Page count: 15
References
71 references in total
[1]  
Abadi M., et al., 2016, TensorFlow: Large-scale machine learning on heterogeneous distributed systems
[2]  
[Anonymous], 2016, Proceedings of IJCAI
[3]  
[Anonymous], 2006, Journal of the Royal Statistical Society, Series B
[4]  
[Anonymous], 2014, A field guide to forward-backward splitting with a FASTA implementation
[5]  
[Anonymous], 2014, COMPRESSING DEEP CON
[6]  
Anwar S., Hwang K., Sung W., 2015, Structured pruning of deep convolutional neural networks
[7]   Representation Learning: A Review and New Perspectives [J].
Bengio, Yoshua ;
Courville, Aaron ;
Vincent, Pascal .
IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2013, 35 (08) :1798-1828
[8]   Distributed optimization and statistical learning via the alternating direction method of multipliers [J].
Boyd S. ;
Parikh N. ;
Chu E. ;
Peleato B. ;
Eckstein J. .
Foundations and Trends in Machine Learning, 2010, 3 (01) :1-122
[9]   Deep Manifold Learning Combined With Convolutional Neural Networks for Action Recognition [J].
Chen, Xin ;
Weng, Jian ;
Lu, Wei ;
Xu, Jiaming ;
Weng, Jiasi .
IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2018, 29 (09) :3938-3952
[10]   Quantized CNN: A Unified Approach to Accelerate and Compress Convolutional Networks [J].
Cheng, Jian ;
Wu, Jiaxiang ;
Leng, Cong ;
Wang, Yuhang ;
Hu, Qinghao .
IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2018, 29 (10) :4730-4743