ROBUSTNESS-AWARE FILTER PRUNING FOR ROBUST NEURAL NETWORKS AGAINST ADVERSARIAL ATTACKS

Cited by: 2
Authors
Lim, Hyuntak [1 ]
Roh, Si-Dong [1 ]
Park, Sangki [1 ]
Chung, Ki-Seok [1 ]
Affiliations
[1] Hanyang Univ, Dept Elect Engn, Seoul, South Korea
Keywords
Deep Learning; Adversarial Attack; Adversarial Training; Filter Pruning;
DOI
10.1109/MLSP52302.2021.9596121
CLC classification
TP18 (Artificial intelligence theory);
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Today, neural networks show remarkable performance in various computer vision tasks, but they are vulnerable to adversarial attacks. Through adversarial training, neural networks can improve their robustness against adversarial attacks; however, adversarial training is time-consuming and resource-intensive. An earlier study analyzed how adversarial attacks affect image features and proposed a robust dataset containing only features that are robust to adversarial attacks. By training on this robust dataset, neural networks can achieve decent accuracy under adversarial attacks without carrying out time-consuming adversarial perturbation. However, even a network trained on the robust dataset may still be vulnerable to adversarial attacks. In this paper, to overcome this limitation, we propose a new method called Robustness-aware Filter Pruning (RFP). To the best of our knowledge, this is the first attempt to use filter pruning to enhance robustness against adversarial attacks. In the proposed method, filters that are involved with non-robust features are pruned. With the proposed method, 52.1% accuracy is achieved against one of the most powerful adversarial attacks, which is 3.8% better than the previous robust-dataset training, while clean-image test accuracy is maintained. Our method also achieves the best performance among filter pruning methods on the robust dataset.
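The abstract describes pruning the filters involved with non-robust features. As a rough illustration of the general mechanics of filter pruning, not the paper's actual RFP criterion, the following sketch ranks the filters of a convolutional layer by a per-filter score and zeroes out the lowest-scoring fraction. The `prune_filters` helper and the L1-norm fallback score are illustrative placeholders; RFP would substitute a robustness-aware score derived from each filter's involvement with non-robust features.

```python
import numpy as np

def prune_filters(weights, prune_ratio, scores=None):
    """Zero out the lowest-scoring filters of a conv layer.

    weights: array of shape (out_channels, in_channels, kH, kW)
    scores:  per-filter importance scores; falls back to L1 norms
             as a generic placeholder (RFP would use a robustness
             score instead).
    """
    if scores is None:
        # L1 norm of each filter's weights as a stand-in importance score
        scores = np.abs(weights).reshape(weights.shape[0], -1).sum(axis=1)
    n_prune = int(weights.shape[0] * prune_ratio)
    pruned = weights.copy()
    if n_prune > 0:
        drop = np.argsort(scores)[:n_prune]  # indices of lowest-scoring filters
        pruned[drop] = 0.0
    return pruned

# 8 filters of a 3x3 conv over 16 input channels; prune 25% (2 filters)
w = np.random.randn(8, 16, 3, 3)
pw = prune_filters(w, 0.25)
```

In practice such masked filters are then physically removed and the network is fine-tuned; here zeroing stands in for removal to keep the sketch short.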
Pages: 6
Related papers
50 records
  • [31] Exploring misclassifications of robust neural networks to enhance adversarial attacks
    Schwinn, Leo
    Raab, Rene
    Nguyen, An
    Zanca, Dario
    Eskofier, Bjoern
    APPLIED INTELLIGENCE, 2023, 53 (17) : 19843 - 19859
  • [32] Robust Regularization Design of Graph Neural Networks Against Adversarial Attacks Based on Lyapunov Theory
    Yan, Wenjie
    Li, Ziqi
    Qi, Yongjun
    CHINESE JOURNAL OF ELECTRONICS, 2024, 33 (03) : 732 - 741
  • [34] Understanding Generalization in Neural Networks for Robustness against Adversarial Vulnerabilities
    Chaudhury, Subhajit
    THIRTY-FOURTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THE THIRTY-SECOND INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE AND THE TENTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2020, 34 : 13714 - 13715
  • [35] Uncertainty-Aware SAR ATR: Defending Against Adversarial Attacks via Bayesian Neural Networks
    Ye, Tian
    Kannan, Rajgopal
    Prasanna, Viktor
    Busart, Carl
    2024 IEEE RADAR CONFERENCE, RADARCONF 2024, 2024,
  • [36] On the robustness of skeleton detection against adversarial attacks
    Bai, Xiuxiu
    Yang, Ming
    Liu, Zhe
    NEURAL NETWORKS, 2020, 132 : 416 - 427
  • [37] A survey on the vulnerability of deep neural networks against adversarial attacks
    Andy Michel
    Sumit Kumar Jha
    Rickard Ewetz
    Progress in Artificial Intelligence, 2022, 11 : 131 - 141
  • [38] ROBUSTNESS OF SAAK TRANSFORM AGAINST ADVERSARIAL ATTACKS
    Ramanathan, Thiyagarajan
    Manimaran, Abinaya
    You, Suya
    Kuo, C-C Jay
    2019 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2019, : 2531 - 2535
  • [39] Adversarial Attacks and Defenses Against Deep Neural Networks: A Survey
    Ozdag, Mesut
    CYBER PHYSICAL SYSTEMS AND DEEP LEARNING, 2018, 140 : 152 - 161
  • [40] Robustness Against Adversarial Attacks Using Dimensionality
    Chattopadhyay, Nandish
    Chatterjee, Subhrojyoti
    Chattopadhyay, Anupam
    SECURITY, PRIVACY, AND APPLIED CRYPTOGRAPHY ENGINEERING, SPACE 2021, 2022, 13162 : 226 - 241