Filter pruning by quantifying feature similarity and entropy of feature maps

Cited: 12
Authors
Liu, Yajun [1 ]
Fan, Kefeng [2 ]
Wu, Dakui [1 ]
Zhou, Wenju [1 ]
Affiliations
[1] Shanghai Univ, Sch Mechatron Engn & Automat, Shanghai 200444, Peoples R China
[2] China Elect Standardizat Inst, Beijing 100007, Peoples R China
Keywords
Filter pruning; Feature similarity (FSIM); Two-dimensional entropy (2D entropy); Feature maps; MODEL;
DOI
10.1016/j.neucom.2023.126297
CLC Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Filter pruning can effectively reduce the time cost and computing resources of convolutional neural networks (CNNs), and is well suited to lightweight edge devices. However, most current pruning methods focus on the inherent properties of the filters themselves and pay less attention to the connection between the filters and the feature maps. Feature similarity (FSIM) exploits the fact that the human visual system is more sensitive to the low-level features of images, allowing it to assess image quality more accurately. We find that FSIM is also suitable for evaluating the feature maps of CNNs. In addition, the information richness of a feature map reflects the importance of the corresponding filter. Based on these observations, we propose to quantify the importance of feature maps with an indicator combining FSIM and two-dimensional entropy (2D entropy) to guide filter pruning (FSIM-E). FSIM-E is evaluated on CIFAR-10 and ILSVRC-2012 to demonstrate that it can effectively compress and accelerate the network model. For example, for ResNet-110 on CIFAR-10, FSIM-E prunes 71.1% of the FLOPs and 66.5% of the parameters while improving accuracy by 0.1%. With ResNet-50 on ILSVRC-2012, FSIM-E achieves a 57.2% pruning rate of FLOPs and a 53.1% pruning rate of parameters with a loss of only 0.42% in Top-5 accuracy. (c) 2023 Elsevier B.V. All rights reserved.
Pages: 11