Group channel pruning and spatial attention distilling for object detection

Cited by: 0
Authors
Yun Chu
Pu Li
Yong Bai
Zhuhua Hu
Yongqing Chen
Jiafeng Lu
Affiliations
[1] Hainan University, School of Information and Communication Engineering
[2] Peking University, School of Software and Microelectronics
Source
Applied Intelligence | 2022, Vol. 52
Keywords
Model compression; Object detection; Group channel pruning; Knowledge distillation
DOI
Not available
Abstract
Due to the over-parameterization of neural networks, many model compression methods based on pruning and quantization have emerged. They are remarkably effective at reducing model size, parameter count, and computational complexity. However, most models compressed by such methods require special hardware and software support, which increases deployment cost. Moreover, these methods are mainly used in classification tasks and are rarely applied directly to detection tasks. To address these issues, we introduce a three-stage model compression method for object detection networks: dynamic sparse training, group channel pruning, and spatial attention distilling. First, to select the unimportant channels in the network while maintaining a good balance between sparsity and accuracy, we propose dynamic sparse training, which introduces a variable sparse rate that changes as training progresses. Second, to reduce the effect of pruning on network accuracy, we propose a novel pruning method called group channel pruning. Specifically, we divide the network into multiple groups according to the scales of the feature layers and the similarity of module structures, and then prune the channels in each group with a different threshold. Finally, to recover the accuracy of the pruned network, we apply an improved knowledge distillation method to it: we extract spatial attention information from the feature maps at specific scales in each group as the knowledge for distillation. In the experiments, we use YOLOv4 as the object detection network and PASCAL VOC as the training dataset. Our method reduces the model's parameters by 64.7% and its computation by 34.9%. With an input image size of 416×416, the original network model is 256 MB with an accuracy of 87.1, while our compressed model achieves an accuracy of 86.6 with a size of 90 MB. To demonstrate the generality of our method, we replace the backbone with Darknet53 and MobileNet and also achieve satisfactory compression results.
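The abstract describes the three stages only in prose. As a rough illustration, the following PyTorch-style sketch shows one plausible way to realize two of them: picking a separate pruning threshold for each group of layers from BatchNorm scaling factors, and an L2 loss between teacher and student spatial attention maps for distillation. The function names, the use of |γ| as the channel-importance score, and the channel-wise mean of squared activations as the attention map are assumptions made for illustration, not details taken from the paper.

```python
# Illustrative sketch only (assumed implementation, not the authors' code).
import torch
import torch.nn.functional as F


def group_prune_thresholds(groups, prune_ratios):
    """Pick one pruning threshold per group of BatchNorm layers.

    groups       -- list of lists of torch.nn.BatchNorm2d modules, one list per group
    prune_ratios -- one pruning ratio in (0, 1) per group; channels whose |gamma|
                    falls below the group's threshold would be removed
    """
    thresholds = []
    for bns, ratio in zip(groups, prune_ratios):
        gammas = torch.cat([bn.weight.detach().abs().flatten() for bn in bns])
        k = max(int(ratio * gammas.numel()), 1)  # index of the k-th smallest |gamma|
        thresholds.append(torch.kthvalue(gammas, k).values.item())
    return thresholds


def spatial_attention(feat, p=2):
    """Collapse a feature map (N, C, H, W) into a normalized spatial attention
    map (N, H*W) by averaging |activation|**p over the channel dimension."""
    att = feat.abs().pow(p).mean(dim=1).flatten(1)
    return F.normalize(att, dim=1)


def attention_distill_loss(student_feats, teacher_feats):
    """Sum of L2 distances between student and teacher attention maps,
    computed at the selected feature scales of each group."""
    return sum(
        (spatial_attention(s) - spatial_attention(t)).pow(2).mean()
        for s, t in zip(student_feats, teacher_feats)
    )
```

For example, calling group_prune_thresholds(groups, [0.5, 0.6, 0.7]) would prune each group at a different rate, reflecting the idea that groups at different feature scales tolerate different amounts of pruning.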
Pages: 16246-16264
Number of pages: 18