Network Pruning for Remote Sensing Images Classification Based on Interpretable CNNs

Cited by: 29
Authors
Guo, Xianpeng [1 ]
Hou, Biao [1 ]
Ren, Bo [1 ]
Ren, Zhongle [1 ]
Jiao, Licheng [1 ]
Affiliations
[1] Xidian Univ, Key Lab Intelligent Percept & Image Understanding, Joint Int Res Lab Intelligent Percept & Computat, Int Res Ctr Intelligent Percept & Computat, Minist Educ, Xian 710071, Peoples R China
Source
IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING | 2022, Vol. 60
Funding
National Natural Science Foundation of China
Keywords
Remote sensing; Sensitivity; Feature extraction; Image coding; Semantics; Computational modeling; Computational efficiency; Convolutional neural networks (CNNs); interpretable CNNs; network pruning; remote sensing image classification; sensitivity function; SCENE CLASSIFICATION
DOI
10.1109/TGRS.2021.3077062
Chinese Library Classification (CLC)
P3 [Geophysics]; P59 [Geochemistry]
Discipline codes
0708; 070902
Abstract
Convolutional neural network (CNN)-based methods have been applied successfully to remote sensing image classification owing to their powerful feature representation ability. However, these high-capacity networks incur heavy inference costs and are easily overparameterized, especially deep CNNs pretrained on natural image datasets. Network pruning is a prevalent approach for compressing networks, but most existing research ignores model interpretability when formulating the pruning criterion. To address these issues, a network pruning method for remote sensing image classification based on interpretable CNNs is proposed. More specifically, an interpretable CNN with a predefined pruning ratio is trained first. Its filters, i.e., the channels in the top convolutional layer, learn specific semantic meanings in proportion to the predefined pruning ratio, and the filters without interpretability are removed. For the remaining convolutional layers, a sensitivity function is designed to assess the risk of pruning channels in each layer, and the pruning ratio of each layer is then corrected adaptively. The pruning method based on the proposed sensitivity function is effective and requires little computational cost to identify the channels to discard, without damaging classification performance. To demonstrate its effectiveness, the proposed method is applied to modern CNN models of different scales, including VGG-VD and AlexNet. Experimental results on the UC Merced and NWPU-RESISC45 datasets show that the method significantly reduces inference costs and improves the interpretability of the networks.
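To make the two-stage idea in the abstract concrete, below is a minimal PyTorch sketch of sensitivity-guided channel pruning. It is an illustration only: the L1-norm channel score, the accuracy-drop sensitivity proxy, and all names (`channel_scores`, `sensitivity`, `corrected_ratio`, `eval_fn`, `tol`) are assumptions made for this sketch, not the interpretability criterion or sensitivity function actually defined in the paper.

```python
# Minimal sketch (not the paper's exact formulation) of sensitivity-guided
# channel pruning. The L1-norm score and the accuracy-drop sensitivity
# proxy are illustrative assumptions.
import torch
import torch.nn as nn


def channel_scores(conv: nn.Conv2d) -> torch.Tensor:
    # Importance of each output channel: L1 norm of its filter weights.
    return conv.weight.detach().abs().sum(dim=(1, 2, 3))


@torch.no_grad()
def sensitivity(model: nn.Module, conv: nn.Conv2d, ratio: float, eval_fn) -> float:
    # Accuracy drop (per eval_fn, e.g. validation accuracy) when the
    # lowest-scoring `ratio` fraction of channels in `conv` is zeroed.
    base = eval_fn(model)
    k = int(ratio * conv.out_channels)
    idx = channel_scores(conv).argsort()[:k]
    saved = conv.weight[idx].clone()
    conv.weight[idx] = 0.0
    drop = base - eval_fn(model)
    conv.weight[idx] = saved  # restore the original filters
    return drop


@torch.no_grad()
def corrected_ratio(model, conv, ratio, eval_fn, tol=0.01, step=0.05) -> float:
    # Adaptive correction: shrink the layer's pruning ratio until the
    # sensitivity stays below a tolerated accuracy drop.
    r = ratio
    while r > 0 and sensitivity(model, conv, r, eval_fn) > tol:
        r -= step
    return max(r, 0.0)
```

Under these assumptions, a per-layer ratio could be obtained as `corrected_ratio(model, layer, 0.5, eval_fn)` before pruning; the structural step of physically rebuilding each layer with fewer channels is omitted for brevity.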
Pages: 15