Matching the Ideal Pruning Method with Knowledge Distillation for Optimal Compression

Cited by: 2
Authors
Malihi, Leila [1]
Heidemann, Gunther [1]
Affiliations
[1] Osnabruck Univ, Inst Cognit Sci, Dept Comp Vis, D-49074 Osnabruck, Germany
Keywords
knowledge distillation; network efficiency; parameter reduction; unstructured pruning; structured pruning
DOI
10.3390/asi7040056
Chinese Library Classification (CLC)
TP [automation technology; computer technology]
Discipline Classification Code
0812
Abstract
In recent years, model compression techniques have gained significant attention as a means of reducing the computational and memory requirements of deep neural networks. Knowledge distillation and pruning are two prominent approaches in this domain, each offering unique advantages for model efficiency. This paper investigates the combined effects of knowledge distillation and two pruning strategies, weight pruning and channel pruning, on compression efficiency and model performance. The study introduces a metric called "Performance Efficiency" to evaluate the impact of these pruning strategies on model compression and performance. Our experiments are conducted on the popular CIFAR-10 and CIFAR-100 datasets and compare diverse model architectures, including ResNet, DenseNet, EfficientNet, and MobileNet. The results confirm the efficacy of both weight and channel pruning in achieving model compression; however, a clear distinction emerges, with weight pruning showing superior performance across all four architecture types. We find that weight pruning adapts better to knowledge distillation than channel pruning. Pruned models show a substantial reduction in parameters without a significant loss in accuracy.
Pages: 17
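
The abstract describes combining knowledge distillation with weight (unstructured) and channel (structured) pruning. The following minimal PyTorch sketch shows one way such a pipeline could be wired together; it is not the authors' implementation, and the stand-in models, temperature T, loss weight alpha, and pruning ratio are assumptions chosen purely for illustration.

    # Illustrative sketch only (assumed setup, not the paper's exact method):
    # soft-target knowledge distillation combined with magnitude-based weight pruning.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F
    import torch.nn.utils.prune as prune

    def distillation_loss(student_logits, teacher_logits, targets, T=4.0, alpha=0.9):
        # Hinton-style KD loss: KL divergence between softened teacher and student
        # distributions (scaled by T^2), blended with the hard-label cross-entropy.
        soft = F.kl_div(
            F.log_softmax(student_logits / T, dim=1),
            F.softmax(teacher_logits / T, dim=1),
            reduction="batchmean",
        ) * (T * T)
        hard = F.cross_entropy(student_logits, targets)
        return alpha * soft + (1.0 - alpha) * hard

    def apply_weight_pruning(model, amount=0.5):
        # Unstructured (weight) pruning: zero the smallest-magnitude weights in every
        # Conv2d/Linear layer. A channel-pruning variant would instead call
        # prune.ln_structured(module, name="weight", amount=amount, n=2, dim=0).
        for module in model.modules():
            if isinstance(module, (nn.Conv2d, nn.Linear)):
                prune.l1_unstructured(module, name="weight", amount=amount)
        return model

    # Minimal usage example with stand-in models and random CIFAR-sized data.
    teacher = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32 * 3, 100))
    student = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32 * 3, 100))
    student = apply_weight_pruning(student, amount=0.5)

    x = torch.randn(8, 3, 32, 32)        # dummy CIFAR-shaped batch
    y = torch.randint(0, 100, (8,))      # dummy CIFAR-100 labels
    with torch.no_grad():
        teacher_logits = teacher(x)
    loss = distillation_loss(student(x), teacher_logits, y)
    loss.backward()  # the pruning mask keeps pruned weight positions at zero during training

In this sketch the distillation loss trains the pruned student against the teacher's softened outputs; whether pruning is applied before, during, or after distillation is a design choice the paper evaluates and is not prescribed here.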