CHANNEL PRUNING VIA GRADIENT OF MUTUAL INFORMATION FOR LIGHTWEIGHT CONVOLUTIONAL NEURAL NETWORKS

Cited by: 0
Authors
Lee, Min Kyu [1 ]
Lee, Seunghyun [1 ]
Lee, Sang Hyuk [1 ]
Song, Byung Cheol [1 ]
Affiliations
[1] Inha Univ, Dept Elect Engn, Incheon, South Korea
Source
2020 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2020
Keywords
convolutional neural network; pruning; model compression; mutual information;
DOI
Not available
CLC Classification Number
TB8 [Photographic Technology];
Subject Classification Code
0804;
Abstract
Channel pruning for lightweight networks is highly effective at reducing memory footprint and computational cost. Many channel pruning methods assume that the magnitude of a particular element associated with each channel reflects that channel's importance. Unfortunately, this assumption does not always hold. To address this problem, this paper proposes a new method that measures channel importance based on gradients of mutual information. The proposed method attaches a module capable of estimating mutual information and computes the gradients of the estimate during back-propagation. Using the measured statistics as channel importance, less important channels can be removed. Finally, fine-tuning robustly restores the performance of the pruned model. Experimental results show that the proposed method achieves better performance with fewer parameters and FLOPs than conventional schemes.
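To make the idea in the abstract concrete, the following is a minimal sketch of gradient-of-mutual-information channel scoring. It assumes a MINE-style (Donsker-Varadhan) estimator stands in for the paper's MI-estimation module; the names MineEstimator and channel_importance are hypothetical illustrations, not taken from the authors' implementation.

# Sketch only: a MINE-style MI estimator is assumed here; the paper's
# actual module and scoring details may differ.
import torch
import torch.nn as nn

class MineEstimator(nn.Module):
    """Small statistics network T(x, y) for a MINE-style MI lower bound."""
    def __init__(self, x_dim, y_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(x_dim + y_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x, y):
        # Joint samples vs. product-of-marginals samples (y shuffled).
        joint = self.net(torch.cat([x, y], dim=1))
        marg = self.net(torch.cat([x, y[torch.randperm(y.size(0))]], dim=1))
        # Donsker-Varadhan lower bound on I(X; Y).
        return joint.mean() - torch.log(marg.exp().mean() + 1e-8)

def channel_importance(features, targets, estimator):
    """Score each channel by the magnitude of d(MI)/d(activation)."""
    # features: (N, C, H, W) activations of the layer being pruned.
    pooled = features.mean(dim=(2, 3)).detach().requires_grad_(True)
    mi = estimator(pooled, targets)
    mi.backward()                          # gradients of the MI estimate
    return pooled.grad.abs().mean(dim=0)   # per-channel importance, shape (C,)

# Usage: keep the top-k channels by importance, then fine-tune the pruned model.
feats = torch.randn(64, 32, 8, 8)          # dummy activations
labels = torch.nn.functional.one_hot(torch.randint(0, 10, (64,)), 10).float()
est = MineEstimator(x_dim=32, y_dim=10)
scores = channel_importance(feats, labels, est)
keep = scores.topk(16).indices             # indices of channels to retain

In practice the estimator would be trained alongside the network and the gradient statistics accumulated over many batches before pruning; this sketch only shows a single scoring pass.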
Pages: 1751 - 1755
Number of pages: 5
Related Papers
50 items in total
  • [31] Review of research on lightweight convolutional neural networks
    Zhou, Yan
    Chen, Shaochang
    Wang, Yiming
    Huan, Wenming
    PROCEEDINGS OF 2020 IEEE 5TH INFORMATION TECHNOLOGY AND MECHATRONICS ENGINEERING CONFERENCE (ITOEC 2020), 2020, : 1713 - 1720
  • [32] Selective Pruning of Sparsity-Supported Energy-Efficient Accelerator for Convolutional Neural Networks
    Liu, Chia-Chi
    Zhang, Xuezhi
    Wey, I-Chyn
    Teo, T. Hui
    2023 IEEE 16TH INTERNATIONAL SYMPOSIUM ON EMBEDDED MULTICORE/MANY-CORE SYSTEMS-ON-CHIP, MCSOC, 2023, : 454 - 461
  • [33] FPAR: filter pruning via attention and rank enhancement for deep convolutional neural networks acceleration
    Chen, Yanming
    Wu, Gang
    Shuai, Mingrui
    Lou, Shubin
    Zhang, Yiwen
    An, Zhulin
    INTERNATIONAL JOURNAL OF MACHINE LEARNING AND CYBERNETICS, 2024, 15 (07) : 2973 - 2985
  • [34] CCPrune: Collaborative channel pruning for learning compact convolutional networks
    Chen, Yanming
    Wen, Xiang
    Zhang, Yiwen
    Shi, Weisong
    NEUROCOMPUTING, 2021, 451 : 35 - 45
  • [35] Global balanced iterative pruning for efficient convolutional neural networks
    Chang, Jingfei
    Lu, Yang
    Xue, Ping
    Xu, Yiqun
    Wei, Zhen
    NEURAL COMPUTING & APPLICATIONS, 2022, 34 (23) : 21119 - 21138
  • [36] Dynamic connection pruning for densely connected convolutional neural networks
    Hu, Xinyi
    Fang, Hangxiang
    Zhang, Ling
    Zhang, Xue
    Yang, Howard H.
    Yang, Dongxiao
    Peng, Bo
    Li, Zheyang
    Hu, Haoji
    APPLIED INTELLIGENCE, 2023, 53 (16) : 19505 - 19521
  • [37] REAP: A Method for Pruning Convolutional Neural Networks with Performance Preservation
    Kamma, Koji
    Wada, Toshikazu
    IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS, 2021, E104D (01) : 194 - 202
  • [39] Automatic Compression Ratio Allocation for Pruning Convolutional Neural Networks
    Liu, Yunfeng
    Kong, Huihui
    Yu, Peihua
    ICVISP 2019: PROCEEDINGS OF THE 3RD INTERNATIONAL CONFERENCE ON VISION, IMAGE AND SIGNAL PROCESSING, 2019
  • [40] Soft Taylor Pruning for Accelerating Deep Convolutional Neural Networks
    Rong, Jintao
    Yu, Xiyi
    Zhang, Mingyang
    Ou, Linlin
    IECON 2020: THE 46TH ANNUAL CONFERENCE OF THE IEEE INDUSTRIAL ELECTRONICS SOCIETY, 2020, : 5343 - 5349