FALF ConvNets: Fatuous auxiliary loss based filter-pruning for efficient deep CNNs

Cited by: 14
Authors
Singh, Pravendra [1 ]
Kadi, Vinay Sameer Raja [2 ]
Namboodiri, Vinay P. [1 ]
Affiliations
[1] Indian Inst Technol Kanpur, Dept Comp Sci & Engn, Kanpur, Uttar Pradesh, India
[2] Carnegie Mellon Univ, Pittsburgh, PA 15213 USA
Keywords
Filter pruning; Model compression; Convolutional neural network; Image recognition; Deep learning;
DOI
10.1016/j.imavis.2019.103857
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Obtaining efficient Convolutional Neural Networks (CNNs) is imperative to enable their application to a wide variety of tasks (classification, detection, etc.). While several methods have been proposed to solve this problem, we propose a novel strategy that is orthogonal to those proposed so far. We hypothesize that if we add a fatuous auxiliary task to a network that aims to solve a semantic task such as classification or detection, the filters devoted to solving this frivolous task would not be relevant for solving the main task of concern. These filters can be pruned, and pruning them does not reduce performance on the original task. We demonstrate that this strategy is not only successful but in fact improves performance on a variety of tasks such as object classification, detection, and action recognition. An interesting observation is that the task needs to be fatuous so that no semantically meaningful filters are relevant for solving it. We thoroughly evaluate our proposed approach on different architectures (LeNet, VGG-16, ResNet, Faster RCNN, SSD-512, C3D, and MobileNet V2) and datasets (MNIST, CIFAR, ImageNet, GTSDB, COCO, and UCF101) and demonstrate its generalizability through extensive experiments. Moreover, our compressed models can be used at run-time without requiring any special libraries or hardware. Our model compression method reduces the number of FLOPS by an impressive factor of 6.03X and the GPU memory footprint by more than 17X for VGG-16, significantly outperforming other state-of-the-art filter pruning methods. We demonstrate the usability of our approach for 3D convolutions and various vision tasks such as object classification, object detection, and action recognition. (C) 2019 Elsevier B.V. All rights reserved.
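The abstract describes the method only at a high level: filters that turn out to be relevant to a deliberately fatuous auxiliary task are assumed safe to prune. A minimal sketch of that selection step is given below; the function name, the `aux_scores` relevance measure, and the `keep_ratio` parameter are all hypothetical illustrations, not the authors' exact FALF formulation.

```python
import numpy as np

def prune_by_auxiliary_relevance(filters, aux_scores, keep_ratio=0.75):
    """Keep the filters LEAST devoted to the fatuous auxiliary task.

    filters    : array of shape (F, C, k, k), the conv layer's filter weights
    aux_scores : array of shape (F,), hypothetical per-filter relevance to
                 the auxiliary task (higher = more devoted to it)
    keep_ratio : fraction of filters to retain for the main task

    Returns the retained filter weights and their original indices.
    """
    n_keep = max(1, int(round(keep_ratio * len(aux_scores))))
    # Sort ascending by auxiliary relevance: filters with the lowest
    # scores are presumed useful for the main semantic task and are kept.
    keep = np.argsort(aux_scores)[:n_keep]
    keep.sort()  # preserve original filter order in the pruned layer
    return filters[keep], keep

# Example: prune half of 8 filters based on made-up auxiliary scores.
rng = np.random.default_rng(0)
filters = rng.standard_normal((8, 3, 3, 3))
aux = np.array([0.9, 0.1, 0.5, 0.05, 0.8, 0.2, 0.7, 0.3])
pruned, kept = prune_by_auxiliary_relevance(filters, aux, keep_ratio=0.5)
```

In a full pipeline the pruned layer would then be fine-tuned on the main task; here only the filter-selection logic is shown.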
Pages: 14
Related Papers
33 records in total
  • [1] ONLINE FILTER CLUSTERING AND PRUNING FOR EFFICIENT CONVNETS
    Zhou, Zhengguang
    Zhou, Wengang
    Hong, Richang
    Li, Houqiang
    2018 25TH IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2018, : 11 - 15
  • [2] Adaptive Dynamic Filter Pruning Approach Based on Deep Learning
    Chu, Jinghui
    Li, Meng
    Lu, Wei
    LASER & OPTOELECTRONICS PROGRESS, 2022, 59 (24)
  • [3] Manipulating Identical Filter Redundancy for Efficient Pruning on Deep and Complicated CNN
    Hao, Tianxiang
    Ding, Xiaohan
    Han, Jungong
    Guo, Yuchen
    Ding, Guiguang
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024, 35 (11) : 16831 - 16844
  • [4] COMPRESSING AUDIO CNNS WITH GRAPH CENTRALITY BASED FILTER PRUNING
    King, James A.
    Singh, Arshdeep
    Plumbley, Mark D.
    2023 IEEE WORKSHOP ON APPLICATIONS OF SIGNAL PROCESSING TO AUDIO AND ACOUSTICS, WASPAA, 2023,
  • [5] An optimal-score-based filter pruning for deep convolutional neural networks
    Sawant, Shrutika S.
    Bauer, J.
    Erick, F. X.
    Ingaleshwar, Subodh
    Holzer, N.
    Ramming, A.
    Lang, E. W.
    Goetz, Th
    APPLIED INTELLIGENCE, 2022, 52 (15) : 17557 - 17579
  • [6] A Novel Clustering-Based Filter Pruning Method for Efficient Deep Neural Networks
    Wei, Xiaohui
    Shen, Xiaoxian
    Zhou, Changbao
    Yue, Hengshan
    ALGORITHMS AND ARCHITECTURES FOR PARALLEL PROCESSING, ICA3PP 2020, PT II, 2020, 12453 : 245 - 258
  • [7] Zero-Keep Filter Pruning for Energy/Power Efficient Deep Neural Networks
    Woo, Yunhee
    Kim, Dongyoung
    Jeong, Jaemin
    Ko, Young-Woong
    Lee, Jeong-Gun
    ELECTRONICS, 2021, 10 (11)
  • [8] Filter Pruning Algorithm Based on Deep Reinforcement Learning
    Liu, Y.
    Teng, Y.
    Niu, T.
    Zhi, J.
    Beijing Youdian Daxue Xuebao/Journal of Beijing University of Posts and Telecommunications, 2023, 46 (03): : 31 - 36
  • [9] Filter Pruning for Efficient Transfer Learning in Deep Convolutional Neural Networks
    Reinhold, Caique
    Roisenberg, Mauro
    ARTIFICIAL INTELLIGENCE AND SOFT COMPUTING, PT I, 2019, 11508 : 191 - 202