Grow-push-prune: Aligning deep discriminants for effective structural network compression

Cited by: 7
Authors
Tian, Qing [1 ,2 ]
Arbel, Tal [2 ]
Clark, James J. [2 ]
Affiliations
[1] Bowling Green State Univ, Dept Comp Sci, Hayes Hall, Bowling Green, OH 43403 USA
[2] McGill Univ, Ctr Intelligent Machines, 3480 Univ St, Montreal, PQ H3A 0E9, Canada
Funding
US National Science Foundation;
Keywords
Deep neural network pruning; Deep discriminant analysis; Deep representation learning; Neural networks; Acceleration;
DOI
10.1016/j.cviu.2023.103682
CLC number
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104; 0812; 0835; 1405;
Abstract
Most of today's popular deep architectures are hand-engineered to be generalists. However, this design procedure usually produces many features that are redundant, useless, or even harmful for a specific task. Unnecessarily high complexity renders deep nets impractical for many real-world applications, especially those without powerful GPU support. In this paper, we derive task-dependent compact models from a deep discriminant analysis perspective. We propose an iterative and proactive approach for classification tasks that alternates between (i) a pushing step, which simultaneously maximizes class separation, penalizes co-variances, and pushes deep discriminants into alignment with a compact set of neurons, and (ii) a pruning step, which discards less useful or even interfering neurons. Deconvolution is adopted to reverse 'unimportant' filters' effects and recover useful contributing sources. A simple network-growing strategy based on the basic Inception module is proposed for challenging tasks that require more capacity than the base net can offer. Experiments on the MNIST, CIFAR10, and ImageNet datasets demonstrate our approach's efficacy. On ImageNet, by pushing and pruning our grown Inception-88 model, we achieve more accurate models than the Inception nets generated during growing, residual nets, and popular compact nets of similar sizes. We also show that our grown Inception nets (without hard-coded dimension alignment) clearly outperform residual nets of similar complexity.
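To make the discriminant-based pruning idea concrete, the following is a minimal, hypothetical sketch (not the authors' exact objective): it scores each neuron in a layer by a Fisher-style ratio of between-class to within-class activation variance, then keeps only the highest-scoring neurons. The function names, the per-neuron (rather than joint) scoring, and the `keep_ratio` parameter are illustrative assumptions.

```python
import numpy as np

def discriminant_scores(acts, labels):
    """Per-neuron Fisher-style score: between-class variance divided by
    within-class variance (illustrative stand-in for a deep discriminant
    criterion). acts: (n_samples, n_neurons) layer activations;
    labels: (n_samples,) integer class labels."""
    classes = np.unique(labels)
    overall_mean = acts.mean(axis=0)
    between = np.zeros(acts.shape[1])
    within = np.zeros(acts.shape[1])
    for c in classes:
        grp = acts[labels == c]
        grp_mean = grp.mean(axis=0)
        between += len(grp) * (grp_mean - overall_mean) ** 2
        within += ((grp - grp_mean) ** 2).sum(axis=0)
    return between / (within + 1e-8)  # eps avoids division by zero

def prune_mask(acts, labels, keep_ratio=0.5):
    """Boolean mask keeping the fraction `keep_ratio` of neurons with the
    highest discriminant scores; the rest would be pruned."""
    scores = discriminant_scores(acts, labels)
    k = max(1, int(keep_ratio * acts.shape[1]))
    keep = np.argsort(scores)[::-1][:k]
    mask = np.zeros(acts.shape[1], dtype=bool)
    mask[keep] = True
    return mask
```

In the paper's alternation, a step like this would follow the pushing step, so that discriminative information has already been concentrated into the neurons the mask retains.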
Pages: 14