Fast Filter Pruning via Coarse-to-Fine Neural Architecture Search and Contrastive Knowledge Transfer

Cited by: 5
Authors
Lee, Seunghyun [1 ]
Song, Byung Cheol [1 ,2 ]
Affiliations
[1] Inha Univ, Dept Elect Engn, Incheon 22212, South Korea
[2] Korea Adv Inst Sci & Technol, Elect Engn, Daejeon, South Korea
Funding
National Research Foundation, Singapore;
Keywords
Costs; Degradation; Knowledge transfer; Knowledge engineering; Computational efficiency; Convolutional neural networks; Training; Deep neural network; filter pruning; knowledge transfer (KT); smaller network;
DOI
10.1109/TNNLS.2023.3236336
CLC classification
TP18 [Theory of Artificial Intelligence];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Filter pruning is the most representative technique for lightweighting convolutional neural networks (CNNs). In general, filter pruning consists of pruning and fine-tuning phases, both of which still require considerable computational cost. Thus, to increase the usability of CNNs, filter pruning itself needs to be lightweight. For this purpose, we propose a coarse-to-fine neural architecture search (NAS) algorithm and a fine-tuning structure based on contrastive knowledge transfer (CKT). First, candidate subnetworks are coarsely searched by a filter importance scoring (FIS) technique, and then the best subnetwork is obtained by a fine search based on NAS-based pruning. The proposed pruning algorithm does not require a supernet and adopts a computationally efficient search process, so it can create a pruned network with higher performance at a lower cost than existing NAS-based search algorithms. Next, a memory bank is configured to store the information of interim subnetworks, i.e., by-products of the above-mentioned subnetwork search phase. Finally, the fine-tuning phase delivers the information in the memory bank through a CKT algorithm. Thanks to the proposed fine-tuning algorithm, the pruned network achieves high performance and fast convergence because it can take clear guidance from the memory bank. Experiments on various datasets and models show that the proposed method offers significant speed efficiency with reasonable performance loss relative to state-of-the-art (SOTA) models. For example, the proposed method pruned ResNet-50 trained on ImageNet-2012 by up to 40.01% with no accuracy loss. Also, since the computational cost amounts to only 210 GPU hours, the proposed method is computationally more efficient than SOTA techniques. The source code is publicly available at https://github.com/sseung0703/FFP.
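The coarse search step ranks filters by an importance score and keeps only the most important ones. The abstract does not specify the scoring function used by FIS, so the sketch below uses the common L1-norm proxy (sum of absolute filter weights) purely as an illustrative stand-in; `filter_importance_scores`, `coarse_prune`, and the `keep_ratio` parameter are hypothetical names, not the authors' API.

```python
import numpy as np

def filter_importance_scores(weights):
    """Score each output filter of a conv layer by its L1 norm.

    weights: array of shape (out_channels, in_channels, kH, kW).
    Returns one importance score per output filter.
    """
    return np.abs(weights).reshape(weights.shape[0], -1).sum(axis=1)

def coarse_prune(weights, keep_ratio=0.6):
    """Keep the top `keep_ratio` fraction of filters by importance.

    Returns the pruned weight tensor and the (sorted) indices of the
    filters that were kept, so downstream layers can be sliced to match.
    """
    scores = filter_importance_scores(weights)
    n_keep = max(1, int(round(keep_ratio * weights.shape[0])))
    keep_idx = np.sort(np.argsort(scores)[::-1][:n_keep])
    return weights[keep_idx], keep_idx

# Example: a toy conv layer with 8 filters of shape 3x3x3.
rng = np.random.default_rng(0)
w = rng.normal(size=(8, 3, 3, 3))
pruned, kept = coarse_prune(w, keep_ratio=0.5)
print(pruned.shape)  # (4, 3, 3, 3)
```

In the paper's pipeline, the subnetworks produced by this kind of coarse scoring are only candidates; the final architecture comes from the subsequent fine NAS-based search, and the interim subnetworks feed the memory bank used during CKT fine-tuning.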
Pages: 9674-9685
Page count: 12