ROBUSTNESS-AWARE FILTER PRUNING FOR ROBUST NEURAL NETWORKS AGAINST ADVERSARIAL ATTACKS

Cited by: 2
Authors
Lim, Hyuntak [1 ]
Roh, Si-Dong [1 ]
Park, Sangki [1 ]
Chung, Ki-Seok [1 ]
Affiliations
[1] Hanyang Univ, Dept Elect Engn, Seoul, South Korea
Source
2021 IEEE 31ST INTERNATIONAL WORKSHOP ON MACHINE LEARNING FOR SIGNAL PROCESSING (MLSP) | 2021
Keywords
Deep Learning; Adversarial Attack; Adversarial Training; Filter Pruning;
DOI
10.1109/MLSP52302.2021.9596121
CLC number
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Today, neural networks show remarkable performance in various computer vision tasks, but they are vulnerable to adversarial attacks. Adversarial training can improve a network's robustness against such attacks, but it is time-consuming and resource-intensive. An earlier study analyzed how adversarial attacks exploit image features and proposed a robust dataset that contains only features robust to adversarial perturbation. By training on the robust dataset, a neural network can achieve decent accuracy under adversarial attacks without carrying out time-consuming adversarial perturbation steps. However, even a network trained on the robust dataset may still be vulnerable to adversarial attacks. In this paper, to overcome this limitation, we propose a new method called Robustness-aware Filter Pruning (RFP). To the best of our knowledge, this is the first attempt to use filter pruning to enhance robustness against adversarial attacks. In the proposed method, the filters that are involved with non-robust features are pruned. With the proposed method, 52.1% accuracy is achieved against one of the most powerful adversarial attacks, which is 3.8% better than the previous robust-dataset training, while clean-image test accuracy is maintained. Our method also achieves the best performance among the filter pruning methods compared on the robust dataset.
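The abstract's core mechanism, pruning the filters tied to non-robust features, can be illustrated with a short sketch. The PyTorch snippet below is a minimal illustration and not the authors' implementation: it assumes a filter's robustness can be scored by how much its activation map shifts between clean and perturbed inputs, and it soft-prunes (zeroes out) the filters with the largest shift. The random-noise perturbation, the 25% pruning ratio, and all variable names are assumptions made for the example; the paper's actual scoring criterion and attack may differ.

```python
# Hypothetical sketch of robustness-aware filter pruning (illustrative only;
# the paper's exact criterion is not reproduced here).
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy convolution standing in for one layer of a trained network.
conv = nn.Conv2d(3, 16, kernel_size=3, padding=1)

x_clean = torch.randn(8, 3, 32, 32)
# Stand-in perturbation; a real attack (e.g., PGD) would be used instead.
x_adv = x_clean + 0.03 * torch.randn_like(x_clean)

with torch.no_grad():
    act_clean = conv(x_clean)                # (8, 16, 32, 32)
    act_adv = conv(x_adv)
    # Per-filter activation shift under perturbation; larger = less robust.
    shift = (act_clean - act_adv).abs().mean(dim=(0, 2, 3))  # (16,)

prune_ratio = 0.25                           # assumed ratio, for illustration
n_prune = int(prune_ratio * conv.out_channels)
# Indices of the filters whose activations move the most under perturbation.
least_robust = shift.argsort(descending=True)[:n_prune]

with torch.no_grad():
    conv.weight[least_robust] = 0.0          # zero out the non-robust filters
    if conv.bias is not None:
        conv.bias[least_robust] = 0.0
```

In a full pipeline, the perturbed inputs would presumably come from an actual adversarial attack, and pruning would be followed by fine-tuning on the robust dataset to recover clean accuracy.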
Pages: 6
Related papers
50 records in total
  • [1] Robust Graph Neural Networks Against Adversarial Attacks via Jointly Adversarial Training
    Tian, Hu
    Ye, Bowei
    Zheng, Xiaolong
    Wu, Desheng Dash
    IFAC PAPERSONLINE, 2020, 53 (05): : 420 - 425
  • [2] Relative Robustness of Quantized Neural Networks Against Adversarial Attacks
    Duncan, Kirsty
    Komendantskaya, Ekaterina
    Stewart, Robert
    Lones, Michael
2020 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2020
  • [3] Not So Robust after All: Evaluating the Robustness of Deep Neural Networks to Unseen Adversarial Attacks
    Garaev, Roman
    Rasheed, Bader
    Khan, Adil Mehmood
    ALGORITHMS, 2024, 17 (04)
  • [4] Robust Graph Convolutional Networks Against Adversarial Attacks
    Zhu, Dingyuan
    Zhang, Ziwei
    Cui, Peng
    Zhu, Wenwu
KDD'19: PROCEEDINGS OF THE 25TH ACM SIGKDD INTERNATIONAL CONFERENCE ON KNOWLEDGE DISCOVERY AND DATA MINING, 2019 : 1399 - 1407
  • [5] Chaotic neural network quantization and its robustness against adversarial attacks
    Osama, Alaa
    Gadallah, Samar I.
    Said, Lobna A.
    Radwan, Ahmed G.
    Fouda, Mohammed E.
    KNOWLEDGE-BASED SYSTEMS, 2024, 286
  • [6] Robustness of Spiking Neural Networks Based on Time-to-First-Spike Encoding Against Adversarial Attacks
    Nomura, Osamu
    Sakemi, Yusuke
    Hosomi, Takeo
    Morie, Takashi
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS II-EXPRESS BRIEFS, 2022, 69 (09) : 3640 - 3644
  • [7] ECG-ATK-GAN: Robustness Against Adversarial Attacks on ECGs Using Conditional Generative Adversarial Networks
    Hossain, Khondker Fariha
    Kamran, Sharif Amit
    Tavakkoli, Alireza
    Ma, Xingjun
    APPLICATIONS OF MEDICAL ARTIFICIAL INTELLIGENCE, AMAI 2022, 2022, 13540 : 68 - 78
  • [8] Exploring misclassifications of robust neural networks to enhance adversarial attacks
    Schwinn, Leo
    Raab, Rene
    Nguyen, An
    Zanca, Dario
    Eskofier, Bjoern
    APPLIED INTELLIGENCE, 2023, 53 (17) : 19843 - 19859
  • [9] Robust Regularization Design of Graph Neural Networks Against Adversarial Attacks Based on Lyapunov Theory
    Yan, Wenjie
    Li, Ziqi
    Qi, Yongjun
    CHINESE JOURNAL OF ELECTRONICS, 2024, 33 (03) : 732 - 741