Iterative filter pruning with combined feature maps and knowledge distillation

Cited by: 0
Authors
Liu, Yajun [1 ]
Fan, Kefeng [2 ]
Zhou, Wenju [1 ]
Affiliations
[1] Shanghai Univ, Sch Mechatron Engn & Automat, Shanghai 200444, Peoples R China
[2] China Elect Standardizat Inst, Beijing 100007, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Filter pruning; Information capacity; Feature relevance; Knowledge distillation;
DOI
10.1007/s13042-024-02371-5
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Convolutional neural networks (CNNs) have been successfully applied to a wide range of computer vision tasks. However, these achievements come with high memory consumption and computational cost, which hinder the deployment of CNNs on resource-constrained mobile devices. Filter pruning is an effective way to address this problem. In this paper, we propose an iterative filter pruning method that combines feature map properties with knowledge distillation. The method preserves the most important feature information (e.g., spatial features) in the feature maps by computing their information capacity and feature relevance, and prunes filters according to the resulting criteria. The pruned network then learns the complete feature information of the unpruned CNN via knowledge distillation, so that the lost accuracy is recovered quickly and fully before the next pruning round. Alternating pruning and knowledge distillation in this way achieves effective and comprehensive network compression. Experiments with mainstream CNN architectures on image classification datasets demonstrate the effectiveness of our approach. For example, on CIFAR-10 our method reduces floating point operations (FLOPs) by 71.8% and parameters by 71.0% while improving accuracy by 0.24% over the ResNet-110 baseline. On ImageNet, it reduces FLOPs by 55.6% and model memory by 52.5% on ResNet-50 at the cost of only a 0.17% drop in Top-5 accuracy.
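The abstract only names the two scoring ingredients and the distillation step, so the following is a minimal, hypothetical PyTorch sketch of how such a criterion could look, not the authors' exact formulation. Assumptions not taken from the paper: "information capacity" is proxied by the average matrix rank of each channel's feature map, "feature relevance" by the mean absolute cosine similarity between channels of the same layer, and the recovery loss is the standard Hinton-style distillation loss; all function names are invented for illustration.

```python
# Hypothetical sketch of the pruning score and distillation loss.
# The rank/cosine proxies and the loss weighting are assumptions,
# not the paper's exact criteria.
import torch
import torch.nn.functional as F

def channel_scores(feats: torch.Tensor) -> torch.Tensor:
    """feats: (N, C, H, W) feature maps of one layer on a calibration batch.
    Returns one score per channel; low-scoring channels are pruning candidates."""
    n, c, h, w = feats.shape
    # Information-capacity proxy: mean rank of each channel's HxW feature map.
    capacity = torch.stack([
        torch.linalg.matrix_rank(feats[:, j].float()).float().mean()
        for j in range(c)
    ])
    # Feature-relevance proxy: mean |cosine similarity| to the other channels.
    flat = F.normalize(feats.mean(0).reshape(c, -1), dim=1)  # (C, H*W)
    sim = (flat @ flat.t()).abs()
    relevance = (sim.sum(1) - 1.0) / (c - 1)  # drop self-similarity on the diagonal
    # High capacity and low redundancy -> high score -> keep the filter.
    return capacity / capacity.max() + (1.0 - relevance)

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    """Standard distillation loss for recovering accuracy between pruning
    rounds; the paper may weight or formulate the terms differently."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```

In an iterative scheme of this kind, one would remove the lowest-scoring filters layer by layer, fine-tune the pruned network against the unpruned teacher with the distillation loss, and repeat until the target FLOPs reduction is reached.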
Pages: 1955-1969
Page count: 15