Adaptive Scaling Filter Pruning Method for Vision Networks With Embedded Devices

Cited by: 0
Authors
Ko, Hyunjun [1]
Kang, Jin-Ku [1]
Kim, Yongwoo [2]
Affiliations
[1] Inha Univ, Dept Elect & Comp Engn, Incheon 22212, South Korea
[2] Korea Natl Univ Educ, Dept Technol Educ, Cheongju 28173, South Korea
Source
IEEE ACCESS | 2024 / Vol. 12
Funding
National Research Foundation of Singapore;
Keywords
Information filters; Adaptive systems; Adaptive filters; Training; Filtering algorithms; Quantization (signal); Batch normalization; Computer vision; Convolutional neural networks; Deep learning; convolutional neural network; inference time; network compression; pruning;
DOI
10.1109/ACCESS.2024.3454329
Chinese Library Classification
TP [Automation Technology; Computer Technology];
Discipline Code
0812 ;
Abstract
Owing to improvements in computing power, deep learning technology based on convolutional neural networks (CNNs) has recently been applied in various fields. However, deploying CNNs on edge devices is challenging because of the large amount of computation required to achieve high performance. To address this problem, pruning, which removes redundant parameters and computations, has been widely studied. However, conventional pruning methods require two training processes, which is time-consuming and resource-intensive, and they cannot reflect the redundancy of the pruned network because pruning is performed only once on the unpruned network. Therefore, in this paper, we use a single training process and propose an adaptive scaling method that dynamically adjusts the size of the network to reflect the changing redundancy of the pruned network. To verify the performance of each method, we compare the proposed methods through experiments on various datasets and networks. In our experiments with ResNet-50 on the ImageNet dataset, pruning 50.1% and 74.0% of FLOPs decreased top-1 accuracy by 0.92% and 3.38% and improved inference time by 26.4% and 58.9%, respectively. In addition, with YOLOv7 on the COCO dataset, pruning 27.37%, 36.84%, and 46.41% of FLOPs reduced mAP(0.5-0.95) by 1.2%, 2.2%, and 2.9% and improved inference time by 12.9%, 16.9%, and 19.3%, respectively.
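The paper's adaptive scaling method is not reproduced in this record; as a minimal illustration of the general family of techniques it belongs to, the sketch below shows structured filter pruning by L1-norm importance, a common baseline criterion. The function name `prune_filters_by_l1` and the toy weight shapes are assumptions for illustration only, not the authors' implementation.

```python
import numpy as np

def prune_filters_by_l1(weights, prune_ratio):
    """Rank convolutional filters by L1 norm and keep the most important ones.

    weights: array of shape (out_channels, in_channels, kH, kW)
    prune_ratio: fraction of filters to remove, in [0, 1)
    Returns the pruned weight tensor and the sorted indices of kept filters.
    """
    # L1 norm of each output filter as its importance score
    importance = np.abs(weights).reshape(weights.shape[0], -1).sum(axis=1)
    n_keep = max(1, int(round(weights.shape[0] * (1.0 - prune_ratio))))
    # Indices of the n_keep filters with the largest L1 norms
    keep = np.sort(np.argsort(importance)[::-1][:n_keep])
    return weights[keep], keep

# Toy example: 8 filters of shape 3x3x3, prune half of them
rng = np.random.default_rng(0)
w = rng.standard_normal((8, 3, 3, 3))
pruned, kept = prune_filters_by_l1(w, 0.5)
print(pruned.shape)  # (4, 3, 3, 3)
```

In a full pipeline, the corresponding input channels of the next layer (and any batch-normalization parameters) would also be sliced by `kept`; the paper's contribution is to revisit such decisions adaptively within a single training run rather than pruning once before retraining.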
Pages: 123771-123781 (11 pages)