A One-step Pruning-recovery Framework for Acceleration of Convolutional Neural Networks

Cited by: 0
Authors
Wang, Dong [1 ]
Bai, Xiao [1 ]
Zhou, Lei [1 ]
Zhou, Jun [2 ]
Affiliations
[1] Beihang Univ, Sch Comp Sci & Engn, Beijing Adv Innovat Ctr Big Data & Brain Comp, Jiangxi Res Inst, Beijing, Peoples R China
[2] Griffith Univ, Sch Informat & Commun Technol, Nathan, Qld, Australia
Source
2019 IEEE 31ST INTERNATIONAL CONFERENCE ON TOOLS WITH ARTIFICIAL INTELLIGENCE (ICTAI 2019) | 2019
Funding
National Natural Science Foundation of China;
Keywords
filter pruning; network pruning; cnn acceleration;
DOI
10.1109/ICTAI.2019.00111
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Acceleration of convolutional neural networks has received increasing attention in recent years. Among various acceleration techniques, filter pruning has an inherent merit: it effectively reduces the number of convolution filters. However, most filter pruning methods resort to a tedious and time-consuming layer-by-layer pruning-recovery strategy to avoid a significant drop in accuracy. In this paper, we present an efficient filter pruning framework to solve this problem. Our method accelerates the network in a one-step pruning-recovery manner with a novel optimization objective function, achieving higher accuracy at much lower cost than existing pruning methods. Furthermore, our method allows network compression with global filter pruning. Given a global pruning rate, it adaptively determines the pruning rate for each convolutional layer, whereas these rates are often set as hyper-parameters in previous approaches. Evaluated on VGG-16 and ResNet-50 using ImageNet, our approach outperforms several state-of-the-art methods with less accuracy drop under the same and even much fewer floating-point operations (FLOPs).
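The global-pruning idea described in the abstract can be sketched as follows: rank all filters across every convolutional layer under a single saliency score, prune the globally weakest fraction, and read off the resulting per-layer pruning rates. This is a minimal illustration using L1-norm saliency as a stand-in criterion; the function name and the L1 criterion are assumptions for illustration, not the paper's actual optimization objective.

```python
import numpy as np

def global_pruning_rates(layer_filters, global_rate):
    """Derive an adaptive per-layer pruning rate from one global rate.

    layer_filters: list of conv weight arrays, each shaped
                   (num_filters, in_channels, kh, kw).
    global_rate:   fraction of all filters (network-wide) to prune.

    Illustrative sketch: saliency here is the filter's L1 norm,
    not the optimization objective used in the paper.
    """
    # Score every filter in the network: (layer_index, l1_norm)
    scores = []
    for i, w in enumerate(layer_filters):
        for f in w.reshape(w.shape[0], -1):
            scores.append((i, np.abs(f).sum()))

    # Prune the globally weakest filters, wherever they live
    n_prune = int(len(scores) * global_rate)
    pruned = sorted(scores, key=lambda t: t[1])[:n_prune]

    # Per-layer rate = pruned filters in that layer / its filter count
    rates = []
    for i, w in enumerate(layer_filters):
        removed = sum(1 for j, _ in pruned if j == i)
        rates.append(removed / w.shape[0])
    return rates

rng = np.random.default_rng(0)
layers = [rng.normal(size=(16, 3, 3, 3)),   # hypothetical conv1
          rng.normal(size=(32, 16, 3, 3))]  # hypothetical conv2
rates = global_pruning_rates(layers, 0.25)
```

Because the ranking is global, layers whose filters score low overall receive higher pruning rates automatically, which is the behavior the abstract contrasts with per-layer hyper-parameter tuning.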
Pages: 768-775
Page count: 8
Related Papers
45 records in total
  • [41] FP-AGL: Filter Pruning With Adaptive Gradient Learning for Accelerating Deep Convolutional Neural Networks
    Kim, Nam Joon
    Kim, Hyun
    IEEE TRANSACTIONS ON MULTIMEDIA, 2023, 25 : 5279 - 5290
  • [42] A Novel Attention-Based Layer Pruning Approach for Low-Complexity Convolutional Neural Networks
    Hossain, Md. Bipul
    Gong, Na
    Shaban, Mohamed
    ADVANCED INTELLIGENT SYSTEMS, 2024, 6 (11)
  • [43] Evolutionary Multi-Objective One-Shot Filter Pruning for Designing Lightweight Convolutional Neural Network
    Wu, Tao
    Shi, Jiao
    Zhou, Deyun
    Zheng, Xiaolong
    Li, Na
    SENSORS, 2021, 21 (17)
  • [44] SAAF: Self-Adaptive Attention Factor-Based Taylor-Pruning on Convolutional Neural Networks
    Lu, Yiheng
    Gong, Maoguo
    Feng, Kaiyuan
    Liu, Jialu
    Guan, Ziyu
    Li, Hao
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024
  • [45] A Group Regularization Framework of Convolutional Neural Networks Based on the Impact of Lp Regularizers on Magnitude
    Li, Feng
    Hu, Yaokai
    Zhang, Huisheng
    Deng, Ansheng
    Zurada, Jacek M.
    IEEE TRANSACTIONS ON SYSTEMS MAN CYBERNETICS-SYSTEMS, 2024, 54 (12): 7434 - 7444