FALF ConvNets: Fatuous auxiliary loss based filter-pruning for efficient deep CNNs

Cited by: 14
Authors
Singh, Pravendra [1 ]
Kadi, Vinay Sameer Raja [2 ]
Namboodiri, Vinay P. [1 ]
Affiliations
[1] Indian Inst Technol Kanpur, Dept Comp Sci & Engn, Kanpur, Uttar Pradesh, India
[2] Carnegie Mellon Univ, Pittsburgh, PA 15213 USA
Keywords
Filter pruning; Model compression; Convolutional neural network; Image recognition; Deep learning;
DOI
10.1016/j.imavis.2019.103857
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Obtaining efficient Convolutional Neural Networks (CNNs) is imperative to enable their application to a wide variety of tasks (classification, detection, etc.). While several methods have been proposed to solve this problem, we propose a novel strategy that is orthogonal to those proposed so far. We hypothesize that if we add a fatuous auxiliary task to a network that aims to solve a semantic task such as classification or detection, the filters devoted to solving this frivolous task will not be relevant to the main task of concern. These filters can be pruned, and pruning them does not reduce performance on the original task. We demonstrate that this strategy is not only successful but in fact allows improved performance on a variety of tasks such as object classification, detection, and action recognition. An interesting observation is that the task needs to be fatuous so that no semantically meaningful filters are relevant to solving it. We thoroughly evaluate our proposed approach on different architectures (LeNet, VGG-16, ResNet, Faster RCNN, SSD-512, C3D, and MobileNet V2) and datasets (MNIST, CIFAR, ImageNet, GTSDB, COCO, and UCF101) and demonstrate its generalizability through extensive experiments. Moreover, our compressed models can be used at run time without requiring any special libraries or hardware. Our model compression method reduces the number of FLOPs by an impressive factor of 6.03X and the GPU memory footprint by more than 17X for VGG-16, significantly outperforming other state-of-the-art filter pruning methods. We demonstrate the usability of our approach for 3D convolutions and various vision tasks such as object classification, object detection, and action recognition. (C) 2019 Elsevier B.V. All rights reserved.
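The selection idea the abstract describes can be illustrated with a toy sketch: once each filter has been assigned an importance score for the main task and for the fatuous auxiliary task, filters whose auxiliary-task score dominates their main-task score would be marked for pruning. This is a minimal sketch of that selection rule only; the function name, the per-filter scores, and the dominance threshold are illustrative assumptions, not the paper's actual formulation or training procedure.

```python
def select_filters_to_prune(main_scores, aux_scores, ratio_threshold=2.0):
    """Hypothetical selection rule: mark a filter for pruning when its
    importance for the fatuous auxiliary task dominates its importance
    for the main task by at least `ratio_threshold`.

    main_scores, aux_scores -- per-filter importance scores (same length).
    Returns the indices of filters to prune.
    """
    eps = 1e-8  # guard against division by zero for dead filters
    return [
        i
        for i, (m, a) in enumerate(zip(main_scores, aux_scores))
        if a / (m + eps) >= ratio_threshold
    ]


# Toy example: five filters with made-up importance scores per task.
main = [0.9, 0.1, 0.8, 0.05, 0.7]
aux = [0.1, 0.9, 0.1, 0.60, 0.1]
print(select_filters_to_prune(main, aux))  # filters 1 and 3 are auxiliary-dominated
```

In a real pipeline the surviving filters would then be copied into a slimmer network and fine-tuned; the sketch covers only the selection step.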
Pages: 14