Efficient and Robust Convolutional Neural Networks via Channel Prioritization and Path Ensemble

Times Cited: 0
Authors
Chang, Chun-Min [1 ]
Lin, Chia-Ching [2 ]
Chen, Kuan-Ta [1 ]
Affiliations
[1] Acad Sinica, Inst Informat Sci, Taipei, Taiwan
[2] Natl Taiwan Univ, Grad Inst Elect Engn, Taipei, Taiwan
Source
2019 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN) | 2019
Keywords
efficient inference; model compression; network pruning; security; model ensemble;
DOI
10.1109/ijcnn.2019.8851922
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
With the growing recognition of both efficiency and security issues in machine learning models, we propose a novel training algorithm for convolutional neural networks (CNNs), called channel prioritization and path ensemble (CPPE), which not only allows dynamic trade-offs between different resource and performance requirements but also enables secure inference without any extra computational cost or memory overhead. Our approach prioritizes channels to prune the network in a structured way and ensembles multiple inference paths over different utilization conditions. We demonstrate the effectiveness of channel prioritization through experiments with the VGG-16 network on various benchmark datasets. The experimental results show that, on the CIFAR-10 dataset, a 10x reduction in parameters and a 4x reduction in FLOPs can be achieved with only a 0.2% drop in accuracy. Furthermore, the model can dynamically trade off resource demand against accuracy, yielding a 16x FLOPs reduction in exchange for only a 4% degradation in accuracy. By ensembling multiple inference paths, the model improves robustness against various adversarial attacks without any additional computational cost or memory overhead. Finally, our method is simple and easily applied to any convolutional neural network.
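To make the two ideas in the abstract concrete, the following is a minimal Python/PyTorch sketch: channels are ranked by an importance score and the lowest-priority ones are masked out (structured pruning), and predictions from several inference paths at different channel-utilization ratios are averaged. The L1-norm ranking criterion, the TinyCPPENet architecture, the keep ratios, and the fact that the paths are run as separate forward passes are all illustrative assumptions, not the paper's exact CPPE formulation; in particular, the paper states that its ensemble adds no extra computational cost, whereas this sketch simply re-runs the network per path for clarity.

# Hypothetical sketch of channel prioritization and path-ensemble inference.
# Ranking criterion (filter L1 norm), architecture, and keep ratios are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

def channel_priority(conv: nn.Conv2d) -> torch.Tensor:
    """Rank output channels by the L1 norm of their filters (assumed criterion)."""
    scores = conv.weight.detach().abs().sum(dim=(1, 2, 3))  # one score per output channel
    return torch.argsort(scores, descending=True)            # highest-priority channels first

def masked_forward(conv: nn.Conv2d, x: torch.Tensor, keep_ratio: float) -> torch.Tensor:
    """Apply the conv, then zero out the lowest-priority output channels (structured pruning)."""
    order = channel_priority(conv)
    n_keep = max(1, int(keep_ratio * conv.out_channels))
    mask = torch.zeros(conv.out_channels, device=x.device)
    mask[order[:n_keep]] = 1.0
    return conv(x) * mask.view(1, -1, 1, 1)

class TinyCPPENet(nn.Module):
    """Toy CNN whose forward pass accepts a channel-utilization ratio."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 32, 3, padding=1)
        self.conv2 = nn.Conv2d(32, 64, 3, padding=1)
        self.fc = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor, keep_ratio: float = 1.0) -> torch.Tensor:
        x = F.relu(masked_forward(self.conv1, x, keep_ratio))
        x = F.relu(masked_forward(self.conv2, x, keep_ratio))
        x = F.adaptive_avg_pool2d(x, 1).flatten(1)
        return self.fc(x)

def path_ensemble_predict(model: TinyCPPENet, x: torch.Tensor,
                          keep_ratios=(1.0, 0.75, 0.5)) -> torch.Tensor:
    """Average softmax outputs over several inference paths (utilization levels)."""
    with torch.no_grad():
        probs = [F.softmax(model(x, r), dim=1) for r in keep_ratios]
    return torch.stack(probs).mean(dim=0)

if __name__ == "__main__":
    model = TinyCPPENet().eval()
    dummy = torch.randn(2, 3, 32, 32)                 # e.g. two CIFAR-10-sized images
    print(path_ensemble_predict(model, dummy).shape)  # torch.Size([2, 10])

Masking (rather than physically removing) the low-priority channels keeps the sketch short; an actual deployment would slice the weight tensors so the pruned channels are never computed, which is where the reported FLOPs and parameter reductions come from.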
Pages: 8