Make l1 regularization effective in training sparse CNN

Cited: 0
Authors
He, Juncai [1 ]
Jia, Xiaodong [2 ]
Xu, Jinchao [1 ]
Zhang, Lian [1 ]
Zhao, Liang [3 ,4 ]
Affiliations
[1] Penn State Univ, Dept Math, University Pk, PA 16802 USA
[2] Penn State Univ, Dept Comp Sci & Engn, University Pk, PA 16802 USA
[3] Chinese Acad Sci, State Key Lab Sci & Engn Comp, Acad Math & Syst Sci, Beijing 100190, Peoples R China
[4] Univ Chinese Acad Sci, Beijing 100190, Peoples R China
Keywords
Sparse optimization; l(1) regularization; Dual averaging; CNN; ONLINE; SUM;
DOI
10.1007/s10589-020-00202-1
CLC classification
C93 [Management]; O22 [Operations Research];
Discipline codes
070105; 12; 1201; 1202; 120202;
Abstract
Compressed sensing using l(1) regularization is among the most powerful and popular sparsification techniques in many applications, but why has it not been used to obtain sparse deep learning models such as convolutional neural networks (CNNs)? This paper aims to answer this question and to show how to make it work. Following Xiao (J Mach Learn Res 11(Oct):2543-2596, 2010), we first demonstrate that the commonly used stochastic gradient descent (SGD) training algorithm and its variants are not an appropriate match for l(1) regularization, and we then replace them with a different training algorithm based on a regularized dual averaging (RDA) method. The RDA method of Xiao (J Mach Learn Res 11(Oct):2543-2596, 2010) was originally designed specifically for convex problems, but with new theoretical insight and algorithmic modifications (using proper initialization and adaptivity), we have made it an effective match for l(1) regularization, achieving state-of-the-art sparsity for the highly non-convex CNN compared with other weight pruning methods without compromising accuracy (for example, 95% sparsity for ResNet-18 on CIFAR-10).
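The key mechanism the abstract refers to is that l1-RDA applies a soft-threshold to the *running average* of all past gradients, so coordinates whose averaged gradient stays below the threshold become exactly zero. A minimal sketch of the simple closed-form l1-RDA step from Xiao (2010) is below; the function name, the toy values, and the parameter choices are illustrative, not from the paper, and the paper's actual algorithm adds initialization and adaptivity on top of this.

```python
import numpy as np

def l1_rda_step(avg_grad, t, lam, gamma):
    """One simple l1-RDA update (after Xiao, 2010).

    avg_grad : running average of subgradients, g_bar_t = (1/t) * sum_{tau<=t} g_tau
    t        : iteration count, t >= 1
    lam      : l1 regularization strength lambda
    gamma    : scale of the sqrt(t)-weighted proximal term

    Solves w_{t+1} = argmin_w <g_bar_t, w> + lam*||w||_1 + (gamma/(2*sqrt(t)))*||w||^2,
    whose closed form soft-thresholds avg_grad: entries with |g_bar| <= lam
    map to exactly zero, which is the source of sparsity.
    """
    shrunk = np.sign(avg_grad) * np.maximum(np.abs(avg_grad) - lam, 0.0)
    return -(np.sqrt(t) / gamma) * shrunk

# Toy usage: coordinates with small averaged gradient are zeroed exactly.
g_bar = np.array([0.05, -0.3, 0.5, -0.02])
w_next = l1_rda_step(g_bar, t=100, lam=0.1, gamma=5.0)
```

Note the contrast with SGD plus l1: a subgradient step perturbs every weight at every iteration, so weights rarely land exactly at zero; the dual-averaging closed form above produces exact zeros whenever the averaged gradient magnitude stays under lambda.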
Pages: 163-182
Page count: 20
References
39 records
[1] Anonymous, 2016, CoRR.
[2] Anonymous, 2017, CoRR.
[3] Bertsekas D.P. Incremental proximal methods for large scale convex optimization. Mathematical Programming, 2011, 129(2):163-195.
[4] Rossini M.B., 2016, PROP INTELECT, p. 227.
[5] Candès E.J., Romberg J., Tao T. Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information. IEEE Transactions on Information Theory, 2006, 52(2):489-509.
[6] Cheng Y., 2017, A survey of model compression and acceleration for deep neural networks.
[7] Cun Y.L., 1990, Proceedings of Advances in Neural Information Processing Systems, p. 598.
[8] Donoho D.L. Compressed sensing. IEEE Transactions on Information Theory, 2006, 52(4):1289-1306.
[9] Duchi J., 2009, Journal of Machine Learning Research, 10:2899.
[10] Eldar Y.C., 2012, Compressed Sensing: Theory and Applications.