Filter Pruning by Switching to Neighboring CNNs With Good Attributes

Cited by: 43
Authors
He, Yang [1 ,2 ]
Liu, Ping [1 ,2 ]
Zhu, Linchao [1 ]
Yang, Yi [3 ]
Affiliations
[1] Univ Technol Sydney, Australian Artificial Intelligence Inst, ReLER Lab, Sydney, NSW 2007, Australia
[2] ASTAR, Ctr Frontier AI Res CFAR, Singapore 138632, Singapore
[3] Zhejiang Univ, Coll Comp Sci & Technol, Hangzhou 310000, Peoples R China
Funding
Australian Research Council;
Keywords
Neural networks; Training; Training data; Neurons; Libraries; Graphics processing units; Electronic mail; Filter pruning; meta-attributes; network compression; neural networks; CLASSIFICATION; ACCURACY;
DOI
10.1109/TNNLS.2022.3149332
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Filter pruning is effective in reducing the computational costs of neural networks. Existing methods show that updating previously pruned filters enables larger model capacity and better performance. However, during the iterative pruning process, the pruning criterion remains the same even as the network weights are updated to new values. In addition, when evaluating filter importance, only the magnitude information of the filters is considered. Yet filters in a neural network do not work in isolation; they affect one another. As a result, the magnitude of each filter, which reflects only the information of that individual filter, is not sufficient to judge its importance. To solve these problems, we propose meta-attribute-based filter pruning (MFP). First, to extend the existing magnitude-based pruning criteria, we introduce a new set of criteria that consider the geometric distance between filters. Second, to explicitly assess the current state of the network, we adaptively select the most suitable pruning criterion via a meta-attribute, a property of the neural network at its current state. Experiments on two image classification benchmarks validate our method. For ResNet-50 on ILSVRC-2012, we reduce more than 50% of FLOPs with only a 0.44% top-5 accuracy loss.
Pages: 8044-8056
Number of pages: 13
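The abstract above contrasts magnitude-based importance criteria with geometric-distance-based ones and describes switching between them adaptively. Below is a minimal, self-contained sketch of those two criterion families, not the authors' implementation: `magnitude_scores`, `geometric_distance_scores`, and `select_filters_to_prune` are hypothetical names, and MFP's meta-attribute-driven choice of criterion is reduced here to a plain function argument.

```python
import numpy as np

def magnitude_scores(filters: np.ndarray) -> np.ndarray:
    # L2 norm of each flattened filter; small-norm filters are the
    # pruning candidates under magnitude-based criteria.
    return np.linalg.norm(filters.reshape(len(filters), -1), axis=1)

def geometric_distance_scores(filters: np.ndarray) -> np.ndarray:
    # Sum of Euclidean distances from each filter to all others; a
    # filter close to the rest of the layer (near its geometric
    # median) carries redundant information and scores low.
    flat = filters.reshape(len(filters), -1)
    diffs = flat[:, None, :] - flat[None, :, :]       # (n, n, d)
    return np.linalg.norm(diffs, axis=2).sum(axis=1)  # (n,)

def select_filters_to_prune(filters, ratio=0.3, criterion="distance"):
    # In MFP the criterion would be selected adaptively from a
    # meta-attribute of the network's current state; here it is a
    # fixed argument for illustration only.
    scores = (geometric_distance_scores(filters)
              if criterion == "distance" else magnitude_scores(filters))
    n_prune = int(len(filters) * ratio)
    return np.argsort(scores)[:n_prune]  # lowest-scoring filter indices

# toy usage: one conv layer with 64 filters of shape 3x3x3
rng = np.random.default_rng(0)
weights = rng.standard_normal((64, 3, 3, 3))
print(select_filters_to_prune(weights, ratio=0.25))
```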
Related papers
50 records in total
  • [31] Evolving filter criteria for randomly initialized network pruning in image classification
    Chen, Xiangru
    Liu, Chenjing
    Hu, Peng
    Lin, Jie
    Gong, Yunhong
    Chen, Yingke
    Peng, Dezhong
    Geng, Xue
    NEUROCOMPUTING, 2024, 594
  • [32] A-pruning: a lightweight pineapple flower counting network based on filter pruning
    Yu, Guoyan
    Cai, Ruilin
    Luo, Yingtong
    Hou, Mingxin
    Deng, Ruoling
    COMPLEX & INTELLIGENT SYSTEMS, 2024, 10 (02) : 2047 - 2066
  • [33] Boosting Lightweight CNNs Through Network Pruning and Knowledge Distillation for SAR Target Recognition
    Wang, Zhen
    Du, Lan
    Li, Yi
    IEEE JOURNAL OF SELECTED TOPICS IN APPLIED EARTH OBSERVATIONS AND REMOTE SENSING, 2021, 14 : 8386 - 8397
  • [34] Filter Pruning Based on Information Capacity and Independence
    Tang, Xiaolong
    Ye, Shuo
    Shi, Yufeng
    Hu, Tianheng
    Peng, Qinmu
    You, Xinge
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024,
  • [35] ONLINE FILTER CLUSTERING AND PRUNING FOR EFFICIENT CONVNETS
    Zhou, Zhengguang
    Zhou, Wengang
    Hong, Richang
    Li, Houqiang
    2018 25TH IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2018, : 11 - 15
  • [36] Filter Pruning Without Damaging Networks Capacity
    Zuo, Yuding
    Chen, Bo
    Shi, Te
    Sun, Mengfan
    IEEE ACCESS, 2020, 8 : 90924 - 90930
  • [37] Filter pruning via feature map clustering
    Li, Wei
    He, Yongxing
    Zhang, Xiaoyu
    Tang, Yongchuan
    INTELLIGENT DATA ANALYSIS, 2023, 27 (04) : 911 - 933
  • [38] Toward Compact ConvNets via Structure-Sparsity Regularized Filter Pruning
    Lin, Shaohui
    Ji, Rongrong
    Li, Yuchao
    Deng, Cheng
    Li, Xuelong
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2020, 31 (02) : 574 - 588
  • [39] A Dual Rank-Constrained Filter Pruning Approach for Convolutional Neural Networks
    Fan, Fugui
    Su, Yuting
    Jing, Peiguang
    Lu, Wei
    IEEE SIGNAL PROCESSING LETTERS, 2021, 28 : 1734 - 1738
  • [40] Towards Data-Agnostic Pruning At Initialization: What Makes a Good Sparse Mask?
    Hoang Pham
    The-Anh Ta
    Liu, Shiwei
    Xiang, Lichuan
    Le, Dung D.
    Wen, Hongkai
    Long Tran-Thanh
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,