Defense against Adversarial Attacks in Image Recognition Based on Multilayer Filters

Cited by: 0
Authors
Wang, Mingde [1]
Liu, Zhijing [1]
Affiliations
[1] Xidian Univ, Comp Informat Applicat Res Ctr, Sch Comp Sci & Technol, Xian 710071, Peoples R China
Source
APPLIED SCIENCES-BASEL | 2024, Vol. 14, Issue 18
Keywords
adversarial attack; deep learning; defense method; machine learning; robustness
DOI
10.3390/app14188119
CLC Number
O6 [Chemistry]
Discipline Classification Code
0703
Abstract
Security and privacy are pressing concerns in building reliable learning-based systems. Recent studies have shown that these systems are susceptible to subtle adversarial perturbations applied to their inputs: although such perturbations are difficult for humans to detect, they can easily mislead deep learning classifiers. Noise injection, as a defense mechanism, can offer a provable defense against adversarial attacks by reducing sensitivity to subtle input changes; however, existing methods face issues of computational complexity and limited adaptability. We propose a multilayer filter defense model, drawing inspiration from filter-based image denoising techniques. The model inserts a filtering layer between the input layer and the first convolutional layer, and incorporates noise injection during training. This substantially enhances the resilience of image classification systems to adversarial attacks. We also investigated how filter combinations, filter window sizes, standard deviations, and the number of filter layers affect defense effectiveness. The experimental results indicate that, across the MNIST, CIFAR10, and CIFAR100 datasets, the multilayer filter defense model achieves the highest average accuracy when employing a double-layer Gaussian filter (3×3 window, standard deviation of 1). Compared with two filter-based defense models, our method attained an average accuracy of 71.9%, effectively enhancing the robustness of the image recognition classifier against adversarial attacks. The method not only performs well on small-scale datasets but also remains robust on a larger dataset (miniImageNet) and with modern models (EfficientNet and WideResNet).
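The paper does not publish code in this record, but the preprocessing it describes — a double-layer Gaussian filter (3×3 window, σ = 1) applied before the network, with Gaussian noise injection during training — can be sketched in plain NumPy. This is a minimal illustration under stated assumptions: the function names (`multilayer_filter`, `noisy_training_input`) and the noise standard deviation of 0.1 are illustrative choices, not values taken from the paper.

```python
import numpy as np

def gaussian_kernel(size=3, sigma=1.0):
    """Build a normalized 2-D Gaussian kernel (3x3, sigma=1 is the
    paper's best-performing setting)."""
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return k / k.sum()

def gaussian_filter2d(img, kernel):
    """Convolve a single-channel image with the kernel
    (edge padding, 'same'-sized output)."""
    ks = kernel.shape[0]
    pad = ks // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(padded[i:i + ks, j:j + ks] * kernel)
    return out

def multilayer_filter(img, layers=2, size=3, sigma=1.0):
    """Apply the Gaussian filter `layers` times (double-layer in the paper).
    This plays the role of the filtering layer inserted between the
    input layer and the first convolutional layer."""
    kernel = gaussian_kernel(size, sigma)
    for _ in range(layers):
        img = gaussian_filter2d(img, kernel)
    return img

def noisy_training_input(img, noise_std=0.1, rng=None):
    """Noise injection during training (illustrative noise_std):
    add Gaussian noise to the input, then pass it through the
    multilayer filter before it reaches the classifier."""
    rng = np.random.default_rng() if rng is None else rng
    noisy = img + rng.normal(0.0, noise_std, img.shape)
    return multilayer_filter(noisy)
```

In a real pipeline the output of `multilayer_filter` (or `noisy_training_input` at training time) would be fed to the CNN in place of the raw image; a production implementation would use a vectorized or framework-native blur rather than the explicit loop shown here.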
Pages: 19