Class-dependent Pruning of Deep Neural Networks

Cited by: 3
Authors
Entezari, Rahim [1]
Saukh, Olga [1]
Affiliation
[1] Graz Univ Technol, Inst Tech Informat, CSH Vienna, Graz, Austria
Source
2020 IEEE SECOND WORKSHOP ON MACHINE LEARNING ON EDGE IN SENSOR SYSTEMS (SENSYS-ML 2020) | 2020
Keywords
deep neural network compression; pruning; lottery ticket hypothesis; data imbalance; class imbalance
DOI
10.1109/SenSysML50931.2020.00010
CLC number (Chinese Library Classification)
TP18 [Theory of Artificial Intelligence]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Today's deep neural networks require substantial computation resources for their training, storage and inference, which limits their effective use on resource-constrained devices. Many recent research activities explore different options for compressing and optimizing deep models. On the one hand, in many real-world applications we face the data imbalance challenge, i.e., the number of labeled instances of one class considerably outweighs that of the other class. On the other hand, applications may pose a class imbalance problem, i.e., a higher number of false positives produced when training a model and optimizing its performance may be tolerable, yet the number of false negatives must stay low. The problem originates from the fact that some classes are more important for the application than others, e.g., detection problems in medical and surveillance domains. Motivated by the success of the lottery ticket hypothesis, in this paper we propose an iterative deep model compression technique which keeps the number of false negatives of the compressed model close to that of the original model, at the price of increasing the number of false positives if necessary. Our experimental evaluation on two benchmark data sets shows that the resulting compressed sub-networks 1) achieve up to 35% fewer false negatives than the compressed model without class optimization, 2) provide an overall higher AUC-ROC measure than the conventional Lottery Ticket algorithm and three recent popular pruning methods, and 3) use up to 99% fewer parameters than the original network. The code is publicly available(1).
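The abstract describes an iterative, lottery-ticket-style pruning loop that trades false positives for false negatives on the application-critical class. Below is a minimal sketch of that idea, assuming PyTorch, global magnitude pruning with weight rewinding, and an acceptance rule that keeps false negatives within a tolerance of the original model's count; all function names (false_negatives, magnitude_prune, class_dependent_lottery_ticket) and the tolerance parameter are illustrative assumptions, not the authors' published code.

# Sketch of class-dependent iterative magnitude pruning (lottery-ticket style).
# Names and the false-negative acceptance rule are assumptions for illustration.
import copy
import torch


def false_negatives(model, loader, positive_class=1):
    """Count false negatives of `model` on `loader` for the critical class."""
    model.eval()
    fn = 0
    with torch.no_grad():
        for x, y in loader:
            preds = model(x).argmax(dim=1)
            fn += ((y == positive_class) & (preds != positive_class)).sum().item()
    return fn


def magnitude_prune(model, fraction):
    """Zero out the globally smallest-magnitude `fraction` of weights (in place)."""
    weights = torch.cat([p.detach().abs().flatten()
                         for p in model.parameters() if p.dim() > 1])
    threshold = torch.quantile(weights, fraction)
    with torch.no_grad():
        for p in model.parameters():
            if p.dim() > 1:
                p.mul_((p.abs() >= threshold).float())


def class_dependent_lottery_ticket(model, train_fn, loader,
                                   rounds=10, fraction_per_round=0.2,
                                   fn_tolerance=1.05):
    """Iteratively prune, rewind surviving weights to their initial values and
    retrain (lottery ticket), accepting a round only if false negatives stay
    within `fn_tolerance` times the original model's false negatives."""
    init_state = copy.deepcopy(model.state_dict())
    fn_original = false_negatives(model, loader)
    best = copy.deepcopy(model)
    for _ in range(rounds):
        candidate = copy.deepcopy(best)
        magnitude_prune(candidate, fraction_per_round)
        # Record the pruning mask, rewind to the initial weights, re-apply the mask.
        mask = {n: (p != 0).float() for n, p in candidate.named_parameters() if p.dim() > 1}
        candidate.load_state_dict(init_state)
        with torch.no_grad():
            for n, p in candidate.named_parameters():
                if n in mask:
                    p.mul_(mask[n])
        train_fn(candidate)  # user-supplied training loop that keeps pruned weights at zero
        if false_negatives(candidate, loader) <= fn_tolerance * fn_original:
            best = candidate  # accept: class-critical errors stayed close to the original
        else:
            break  # stop: false negatives grew beyond the tolerated level
    return best

In practice, `train_fn` would be the usual training loop with the pruning mask re-applied after every optimizer step, and `loader` a held-out validation split used to monitor false negatives.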
Pages: 13-18
Number of pages: 6