Pruning by explaining: A novel criterion for deep neural network pruning

Cited by: 132
Authors
Yeom, Seul-Ki [1 ,9 ]
Seegerer, Philipp [1 ,8 ]
Lapuschkin, Sebastian [3 ]
Binder, Alexander [4 ,5 ]
Wiedemann, Simon [3 ]
Mueller, Klaus-Robert [1 ,2 ,6 ,7 ]
Samek, Wojciech [2 ,3 ]
Affiliations
[1] Tech Univ Berlin, Machine Learning Grp, D-10587 Berlin, Germany
[2] BIFOLD Berlin Inst Fdn Learning & Data, Berlin, Germany
[3] Fraunhofer Heinrich Hertz Inst, Dept Artificial Intelligence, D-10587 Berlin, Germany
[4] Singapore Univ Technol & Design, ISTD Pillar, Singapore 487372, Singapore
[5] Univ Oslo, Dept Informat, N-0373 Oslo, Norway
[6] Korea Univ, Dept Artificial Intelligence, Seoul 136713, South Korea
[7] Max Planck Inst Informat, D-66123 Saarbrucken, Germany
[8] Aignost GmbH, D-10557 Berlin, Germany
[9] Nota AI GmbH, D-10117 Berlin, Germany
Keywords
Pruning; Layer-wise relevance propagation (LRP); Convolutional neural network (CNN); Interpretation of models; Compression
DOI
10.1016/j.patcog.2021.107899
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
The success of convolutional neural networks (CNNs) in various applications is accompanied by a significant increase in computation and parameter storage costs. Recent efforts to reduce these overheads involve pruning and compressing the weights of various layers while at the same time aiming not to sacrifice performance. In this paper, we propose a novel criterion for CNN pruning inspired by neural network interpretability: the most relevant units, i.e. weights or filters, are automatically found using their relevance scores obtained from concepts of explainable AI (XAI). By exploring this idea, we connect the lines of interpretability and model-compression research. We show that our proposed method can efficiently prune CNN models in transfer-learning setups in which networks pre-trained on large corpora are adapted to specialized tasks. The method is evaluated on a broad range of computer vision datasets. Notably, our novel criterion is not only competitive with or better than state-of-the-art pruning criteria when successive retraining is performed, but clearly outperforms these previous criteria in the resource-constrained application scenario in which data for the target task is very scarce and one chooses to refrain from fine-tuning. Our method is able to compress the model iteratively while maintaining or even improving accuracy. At the same time, it has a computational cost on the order of a gradient computation and is comparatively simple to apply, without the need to tune hyperparameters for pruning. © 2021 The Authors. Published by Elsevier Ltd. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
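To make the idea concrete, the sketch below illustrates relevance-guided filter pruning for a PyTorch CNN. It is a minimal sketch, not the authors' implementation: as a crude stand-in for the paper's layer-wise relevance propagation (LRP) scores, it accumulates |activation × gradient| per convolutional filter, and it "prunes" by zeroing the least relevant filters rather than removing channels. The function names `filter_relevance` and `prune_filters` and the soft-pruning choice are illustrative assumptions.

```python
# Minimal sketch of relevance-guided filter pruning, assuming a PyTorch CNN.
# NOT the authors' released code: |activation * gradient| is used here as a
# crude proxy for their LRP relevance scores.
import torch

def filter_relevance(model, conv_layer, data_loader, loss_fn, device="cpu"):
    """Accumulate one relevance score per output filter of `conv_layer`."""
    scores = torch.zeros(conv_layer.out_channels, device=device)
    cache = {}

    def hook(module, inputs, output):
        output.retain_grad()          # keep the gradient of this activation map
        cache["act"] = output

    handle = conv_layer.register_forward_hook(hook)
    model.to(device).eval()
    for x, y in data_loader:
        x, y = x.to(device), y.to(device)
        model.zero_grad()
        loss_fn(model(x), y).backward()
        act = cache["act"]            # shape: (batch, channels, H, W)
        # Sum |activation * gradient| over batch and spatial dims -> (channels,)
        scores += (act * act.grad).abs().sum(dim=(0, 2, 3)).detach()
    handle.remove()
    return scores

def prune_filters(conv_layer, scores, ratio=0.1):
    """Zero out the `ratio` fraction of filters with the lowest relevance."""
    k = int(conv_layer.out_channels * ratio)
    idx = torch.argsort(scores)[:k]   # indices of the least relevant filters
    with torch.no_grad():
        conv_layer.weight[idx] = 0.0
        if conv_layer.bias is not None:
            conv_layer.bias[idx] = 0.0
```

In use, one would score each Conv2d layer on a small reference batch and then call `prune_filters` with the desired ratio, repeating the score-and-prune cycle iteratively, which mirrors the iterative compression the abstract describes and, like the paper's criterion, costs roughly one gradient computation per pass.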
Pages: 14