Shakeout: A New Approach to Regularized Deep Neural Network Training

Cited by: 36
Authors
Kang, Guoliang [1 ]
Li, Jun [1 ]
Tao, Dacheng [2 ,3 ]
Affiliations
[1] Univ Technol Sydney, Fac Engn & Informat Technol, Ctr AI, Ultimo, NSW, Australia
[2] Univ Sydney, UBTech Sydney Artificial Intelligence Inst, Darlington, NSW 2008, Australia
[3] Univ Sydney, Sch Informat Technol, Fac Engn & Informat Technol, Darlington, NSW 2008, Australia
Funding
Australian Research Council;
Keywords
Shakeout; dropout; regularization; sparsity; deep neural network; DROPOUT; SELECTION;
DOI
10.1109/TPAMI.2017.2701831
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Recent years have witnessed the success of deep neural networks on a wide range of practical problems. Dropout has played an essential role in many successful deep neural networks by inducing regularization during model training. In this paper, we present a new regularized training approach: Shakeout. Instead of randomly discarding units at the training stage as Dropout does, Shakeout randomly chooses to enhance or reverse each unit's contribution to the next layer. This minor modification of Dropout has a notable statistical trait: the regularizer induced by Shakeout adaptively combines L-0, L-1, and L-2 regularization terms. Our classification experiments with representative deep architectures on the image datasets MNIST, CIFAR-10, and ImageNet show that Shakeout deals with over-fitting effectively and outperforms Dropout. We empirically demonstrate that Shakeout leads to sparser weights under both unsupervised and supervised settings, and that it induces a grouping effect among the input units of a layer. Because the resulting weights better reflect the importance of connections, Shakeout is superior to Dropout for deep model compression. Moreover, we demonstrate that Shakeout effectively reduces the instability of the training process of deep architectures.
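The "enhance or reverse" operation described in the abstract can be sketched as a random perturbation of the weights feeding each input unit. The following is a minimal NumPy illustration, assuming a commonly stated form of the Shakeout perturbation with hyperparameters tau (reverse probability) and c (sign-boost strength); the function name and shapes are illustrative, not the authors' code. Note that with c = 0 the sketch reduces to inverted Dropout, and the perturbed weights are unbiased (their expectation equals W).

```python
import numpy as np

def shakeout_weights(W, tau=0.5, c=0.1, rng=None):
    """Draw one Shakeout-perturbed copy of a weight matrix W of shape [n_in, n_out].

    Each input unit j gets an i.i.d. Bernoulli draw r_j with P(r_j = 0) = tau:
      r_j = 1 ("enhance"): row j -> W[j]/(1-tau) + (c*tau/(1-tau)) * sign(W[j])
      r_j = 0 ("reverse"): row j -> -c * sign(W[j])
    E[W_tilde] = W, and c = 0 recovers inverted Dropout exactly.
    """
    rng = np.random.default_rng() if rng is None else rng
    n_in = W.shape[0]
    # One Bernoulli draw per input unit, broadcast across that unit's outgoing weights.
    r = (rng.random(n_in) >= tau).astype(W.dtype)[:, None]  # 1 = enhance, 0 = reverse
    s = np.sign(W)
    enhance = W / (1.0 - tau) + (c * tau / (1.0 - tau)) * s
    reverse = -c * s
    return r * enhance + (1.0 - r) * reverse
```

At test time no sampling is needed: since the perturbation is unbiased, the original weights W are used directly, mirroring how inverted Dropout is deployed.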
Pages: 1245-1258
Page count: 14