Learning to Prune Deep Neural Networks via Layer-wise Optimal Brain Surgeon

Cited by: 0
Authors
Dong, Xin [1 ]
Chen, Shangyu [1 ]
Pan, Sinno Jialin [1 ]
Affiliations
[1] Nanyang Technological University, Singapore, Singapore
Source
ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 30 (NIPS 2017) | 2017 / Vol. 30
Keywords
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
How to develop slim yet accurate deep neural networks has become crucial for real-world applications, especially for those deployed in embedded systems. Though previous work along this research line has shown some promising results, most existing methods either fail to significantly compress a well-trained deep network or require a heavy retraining process for the pruned network to recover its prediction performance. In this paper, we propose a new layer-wise pruning method for deep neural networks. In our proposed method, the parameters of each individual layer are pruned independently based on the second-order derivatives of a layer-wise error function with respect to the corresponding parameters. We prove that the final drop in prediction performance after pruning is bounded by a linear combination of the reconstruction errors introduced at each layer. By controlling the layer-wise errors properly, one only needs to perform a light retraining process on the pruned network to restore its original prediction performance. We conduct extensive experiments on benchmark datasets to demonstrate the effectiveness of our pruning method compared with several state-of-the-art baseline methods. The code for our work is released at: https://github.com/csyhhu/L-OBS.
Pages: 11
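
The abstract above summarizes the layer-wise Optimal Brain Surgeon (L-OBS) idea: prune each layer's weights using second-order information of a layer-wise reconstruction error, so that only light retraining is needed afterwards. Below is a minimal, illustrative NumPy sketch of that idea for a single fully-connected layer. It is not the authors' released implementation (see the GitHub link above); the function name, the `prune_ratio` and `damp` arguments, and the greedy one-weight-at-a-time loop are assumptions made for clarity.

```python
import numpy as np

def lobs_prune_layer(W, Y, prune_ratio=0.5, damp=1e-4):
    """Sketch of layer-wise OBS pruning for one fully-connected layer.

    Assumes the layer computes Z = Y @ W, with input activations
    Y of shape (n_samples, d_in) and weights W of shape (d_in, d_out).
    Names and defaults are illustrative, not the released L-OBS code.
    """
    n, d_in = Y.shape
    # Layer-wise Hessian of the reconstruction error, shared by all
    # output units: H = Y^T Y / n, with a small damping term so it
    # stays invertible.
    H = Y.T @ Y / n + damp * np.eye(d_in)
    H_inv = np.linalg.inv(H)

    W = W.copy()
    mask = np.ones_like(W, dtype=bool)
    n_prune = int(prune_ratio * W.size)

    for _ in range(n_prune):
        # OBS saliency of each remaining weight: L_q = w_q^2 / (2 [H^-1]_qq).
        diag = np.diag(H_inv)[:, None]                      # shape (d_in, 1)
        saliency = np.where(mask, W ** 2 / (2.0 * diag), np.inf)
        q, j = np.unravel_index(np.argmin(saliency), W.shape)

        # Prune w_{qj} and adjust the rest of column j so the change in
        # this layer's output is locally minimised:
        # delta_w = -(w_{qj} / [H^-1]_qq) * H^-1 e_q.
        W[:, j] -= (W[q, j] / H_inv[q, q]) * H_inv[:, q]
        mask[q, j] = False
        W[:, j] *= mask[:, j]   # keep previously pruned weights at zero

    return W, mask
```

As described in the abstract, a full pipeline would apply such a per-layer step to every layer independently, using a batch of that layer's input activations, and then lightly retrain the pruned network to recover its original accuracy.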
Related Papers
50 records in total
  • [21] Divide and Slide: Layer-Wise Refinement for Output Range Analysis of Deep Neural Networks
    Huang, Chao
    Fan, Jiameng
    Chen, Xin
    Li, Wenchao
    Zhu, Qi
    IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS, 2020, 39 (11) : 3323 - 3335
  • [22] Dynamic layer-wise sparsification for distributed deep learning
    Zhang, Hao
    Wu, Tingting
    Ma, Zhifeng
    Li, Feng
    Liu, Jie
    FUTURE GENERATION COMPUTER SYSTEMS-THE INTERNATIONAL JOURNAL OF ESCIENCE, 2023, 147 : 1 - 15
  • [23] Differential Evolution Based Layer-Wise Weight Pruning for Compressing Deep Neural Networks
    Wu, Tao
    Li, Xiaoyang
    Zhou, Deyun
    Li, Na
    Shi, Jiao
    SENSORS, 2021, 21 (03) : 1 - 20
  • [25] Exploiting potential of deep neural networks by layer-wise fine-grained parallelism
    Jiang, Wenbin
    Zhang, Yangsong
    Liu, Pai
    Peng, Jing
    Yang, Laurence T.
    Ye, Geyan
    Jin, Hai
    FUTURE GENERATION COMPUTER SYSTEMS-THE INTERNATIONAL JOURNAL OF ESCIENCE, 2020, 102 : 210 - 221
  • [26] Post-training deep neural network pruning via layer-wise calibration
    Lazarevich, Ivan
    Kozlov, Alexander
    Malinin, Nikita
    2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION WORKSHOPS (ICCVW 2021), 2021, : 798 - 805
  • [27] Potential Layer-Wise Supervised Learning for Training Multi-Layered Neural Networks
    Kamimura, Ryotaro
    2017 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2017, : 2568 - 2575
  • [28] Forward layer-wise learning of convolutional neural networks through separation index maximizing
    Karimi, Ali
    Kalhor, Ahmad
    Tabrizi, Melika Sadeghi
    SCIENTIFIC REPORTS, 2024, 14 (01)
  • [29] Layer-Wise Training to Create Efficient Convolutional Neural Networks
    Zeng, Linghua
    Tian, Xinmei
    NEURAL INFORMATION PROCESSING (ICONIP 2017), PT II, 2017, 10635 : 631 - 641
  • [30] Stochastic Neural Networks with Layer-Wise Adjustable Sequence Length
    Wang, Ziheng
    Reviriego, Pedro
    Niknia, Farzad
    Liu, Shanshan
    Gao, Zhen
    Lombardi, Fabrizio
    2024 IEEE 24TH INTERNATIONAL CONFERENCE ON NANOTECHNOLOGY, NANO 2024, 2024, : 436 - 441