ROBUSTNESS-AWARE FILTER PRUNING FOR ROBUST NEURAL NETWORKS AGAINST ADVERSARIAL ATTACKS

Cited by: 2
Authors
Lim, Hyuntak [1 ]
Roh, Si-Dong [1 ]
Park, Sangki [1 ]
Chung, Ki-Seok [1 ]
Affiliations
[1] Hanyang Univ, Dept Elect Engn, Seoul, South Korea
Keywords
Deep Learning; Adversarial Attack; Adversarial Training; Filter Pruning
DOI
10.1109/MLSP52302.2021.9596121
CLC Number
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Today, neural networks show remarkable performance in various computer vision tasks, but they are vulnerable to adversarial attacks. Through adversarial training, neural networks can improve their robustness against adversarial attacks; however, adversarial training is time-consuming and resource-intensive. An earlier study analyzed how adversarial attacks exploit image features and proposed a robust dataset that contains only features robust to adversarial perturbation. By training on the robust dataset, neural networks can achieve decent accuracy under adversarial attacks without carrying out time-consuming adversarial perturbation during training. However, even a network trained on the robust dataset may still be vulnerable to adversarial attacks. In this paper, to overcome this limitation, we propose a new method called Robustness-aware Filter Pruning (RFP). To the best of our knowledge, this is the first attempt to utilize filter pruning to enhance robustness against adversarial attacks. In the proposed method, the filters that are involved with non-robust features are pruned. With the proposed method, 52.1% accuracy is achieved against one of the most powerful adversarial attacks, which is 3.8% better than the previous robust-dataset training while clean-image test accuracy is maintained. In addition, our method achieves the best performance among the compared filter pruning methods on the robust dataset.
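This record does not spell out how RFP decides which filters are tied to non-robust features, so the following is only a minimal PyTorch sketch of the general idea, not the paper's algorithm. Here each convolutional filter is scored by the gap between its activations on clean and adversarially perturbed inputs, and the filters with the largest gap are zeroed out. The function names, the activation-gap criterion, and the random-noise stand-in for a real attack (e.g., PGD) are all assumptions for illustration.

# Hypothetical sketch of a robustness-aware filter-pruning criterion.
# The scoring rule below (clean-vs-adversarial activation gap) is an
# assumption, not the criterion from the paper.
import torch
import torch.nn as nn

def filter_robustness_scores(conv: nn.Conv2d, x_clean: torch.Tensor,
                             x_adv: torch.Tensor) -> torch.Tensor:
    """Score each output filter by the mean squared gap between its
    activations on clean and adversarial inputs (larger gap = less robust)."""
    with torch.no_grad():
        a_clean = conv(x_clean)   # shape (N, C_out, H, W)
        a_adv = conv(x_adv)
        # Average over batch and spatial dims -> one score per filter.
        gap = (a_clean - a_adv).pow(2).mean(dim=(0, 2, 3))
    return gap

def prune_least_robust(conv: nn.Conv2d, scores: torch.Tensor,
                       prune_ratio: float = 0.2) -> None:
    """Zero out the filters with the largest activation gap (masking
    stands in for structural removal to keep the sketch self-contained)."""
    n_prune = int(prune_ratio * scores.numel())
    idx = torch.topk(scores, n_prune).indices   # least robust filters
    with torch.no_grad():
        conv.weight[idx] = 0.0
        if conv.bias is not None:
            conv.bias[idx] = 0.0

# Usage on a single layer with clean images and perturbed counterparts.
conv = nn.Conv2d(3, 64, kernel_size=3, padding=1)
x_clean = torch.randn(8, 3, 32, 32)
x_adv = x_clean + 0.03 * torch.randn_like(x_clean)  # stand-in for a real attack
scores = filter_robustness_scores(conv, x_clean, x_adv)
prune_least_robust(conv, scores, prune_ratio=0.2)

In practice, x_adv would come from an actual attack such as PGD, and the selected filters would be removed structurally (shrinking the layer) rather than merely masked; zeroing is used here only to keep the example short and runnable.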
Pages: 6