Adaptive Scaling Filter Pruning Method for Vision Networks With Embedded Devices

Cited by: 0
Authors
Ko, Hyunjun [1]
Kang, Jin-Ku [1]
Kim, Yongwoo [2]
Affiliations
[1] Inha Univ, Dept Elect & Comp Engn, Incheon 22212, South Korea
[2] Korea Natl Univ Educ, Dept Technol Educ, Cheongju 28173, South Korea
Source
IEEE ACCESS | 2024, Vol. 12
Funding
National Research Foundation of Singapore;
Keywords
Information filters; Adaptive systems; Adaptive filters; Training; Filtering algorithms; Quantization (signal); Batch normalization; Computer vision; Convolutional neural networks; Deep learning; convolutional neural network; inference time; network compression; pruning;
DOI
10.1109/ACCESS.2024.3454329
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline Code
0812 ;
Abstract
Owing to improvements in computing power, deep learning based on convolutional neural networks (CNNs) has recently been applied in various fields. However, deploying CNNs on edge devices is challenging because of the large amount of computation required to achieve high performance. To address this, pruning, which removes redundant parameters and computations, has been widely studied. However, conventional pruning methods require two training processes, which is time-consuming and resource-intensive, and because pruning is performed only once on the unpruned network, they cannot reflect the redundancy that remains in the pruned network. Therefore, in this paper, we use a single training process and propose an adaptive scaling method that dynamically adjusts the size of the network to reflect the changing redundancy of the pruned network. To verify each method, we conduct experiments on various datasets and networks. On ResNet-50 with the ImageNet dataset, pruning 50.1% and 74.0% of FLOPs decreased top-1 accuracy by 0.92% and 3.38% and improved inference time by 26.4% and 58.9%, respectively. On YOLOv7 with the COCO dataset, pruning 27.37%, 36.84%, and 46.41% of FLOPs reduced mAP(0.5-0.95) by 1.2%, 2.2%, and 2.9% and improved inference time by 12.9%, 16.9%, and 19.3%, respectively.
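The generic idea behind the filter pruning the abstract builds on can be sketched as follows. This is an illustrative magnitude-based (L1-norm) one-shot pruning sketch, not the paper's adaptive scaling algorithm; the filter shapes and keep-ratio are hypothetical.

```python
# Illustrative sketch of one-shot filter pruning by L1 magnitude.
# Not the paper's adaptive scaling method: it only shows the generic
# step of ranking a conv layer's filters by importance and keeping
# the strongest ones. Shapes and keep_ratio are hypothetical.

def l1_norm(filt):
    """Sum of absolute weights of one filter (arbitrarily nested lists)."""
    if isinstance(filt, (int, float)):
        return abs(filt)
    return sum(l1_norm(x) for x in filt)

def prune_filters(layer_filters, keep_ratio):
    """Keep the top keep_ratio fraction of filters by L1 norm.

    layer_filters: list of filters, each a nested list of weights
                   (e.g. shape [in_ch][k][k]).
    Returns the sorted indices of the surviving filters.
    """
    scores = [(l1_norm(f), i) for i, f in enumerate(layer_filters)]
    scores.sort(reverse=True)                  # most important first
    n_keep = max(1, int(len(layer_filters) * keep_ratio))
    return sorted(i for _, i in scores[:n_keep])

# Toy layer with 4 filters of shape [1][2][2]; filter 2 is weakest.
filters = [
    [[[0.9, -0.8], [0.7, 0.6]]],   # L1 norm = 3.0
    [[[0.1,  0.2], [0.1, 0.1]]],   # L1 norm = 0.5
    [[[0.0,  0.0], [0.1, 0.0]]],   # L1 norm = 0.1
    [[[0.5, -0.5], [0.5, 0.5]]],   # L1 norm = 2.0
]
kept = prune_filters(filters, keep_ratio=0.5)
print(kept)  # [0, 3] -> the two highest-magnitude filters survive
```

A one-shot method applies this ranking once to the trained, unpruned network; the abstract's point is that the ranking becomes stale afterward, which motivates re-evaluating redundancy as the network shrinks during a single training run.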
Pages: 123771-123781
Page count: 11
References (40 total)
[31] Ren, S., He, K., Girshick, R., Sun, J., "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks," IEEE Transactions on Pattern Analysis and Machine Intelligence, 39(6):1137-1149, 2017.
[32] Ronneberger, O., Fischer, P., Brox, T., "U-Net: Convolutional Networks for Biomedical Image Segmentation," Medical Image Computing and Computer-Assisted Intervention (MICCAI), Pt. III, 9351:234-241, 2015.
[33] Simonyan, K., arXiv:1409.1556, DOI 10.48550/arXiv.1409.1556, 2015.
[34] Sui, Y., Advances in Neural Information Processing Systems, Vol. 34, 2021.
[35] Szegedy, C., Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), p. 1, DOI 10.1109/CVPR.2015.7298594, 2015.
[36] Wang, C.-Y., Bochkovskiy, A., Liao, H.-Y. M., "YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors," IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 7464-7475, 2023.
[37] Wang, H., arXiv:2207.12534, 2022.
[38] Xu, X. F., arXiv:1811.00482, 2018.
[39] Yu, R., Li, A., Chen, C.-F., Lai, J.-H., Morariu, V. I., Han, X., Gao, M., Lin, C.-Y., Davis, L. S., "NISP: Pruning Networks using Neuron Importance Score Propagation," IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 9194-9203, 2018.
[40] zejiangh, Filter-Gap, 2022.