Defense against Adversarial Attacks in Image Recognition Based on Multilayer Filters

Cited: 0
Authors
Wang, Mingde [1]
Liu, Zhijing [1]
Affiliations
[1] Xidian Univ, Comp Informat Applicat Res Ctr, Sch Comp Sci & Technol, Xian 710071, Peoples R China
Source
APPLIED SCIENCES-BASEL | 2024, Vol. 14, Issue 18
Keywords
adversarial attack; deep learning; defense method; machine learning; robustness
DOI
10.3390/app14188119
Chinese Library Classification (CLC)
O6 [Chemistry]
Discipline Code
0703
Abstract
Security and privacy are pressing concerns in building secure and efficient learning-based systems. Recent studies have shown that such systems are susceptible to subtle adversarial perturbations applied to their inputs. Although these perturbations are difficult for humans to detect, they can easily mislead deep learning classifiers. Noise injection, as a defense mechanism, can offer a provable defense against adversarial attacks by reducing sensitivity to subtle input changes; however, such methods suffer from high computational complexity and limited adaptability. We propose a multilayer filter defense model inspired by filter-based image denoising techniques. The model inserts a filtering layer between the input layer and the first convolutional layer and incorporates noise injection during training, substantially enhancing the resilience of image classification systems to adversarial attacks. We also investigate how different filter combinations, filter window sizes, standard deviations, and numbers of filter layers affect defense effectiveness. Experiments on the MNIST, CIFAR10, and CIFAR100 datasets show that the multilayer filter defense model achieves the highest average accuracy with a double-layer Gaussian filter (3×3 window, standard deviation of 1). Compared with two filter-based defense models, our method attained an average accuracy of 71.9%, effectively enhancing the robustness of image recognition classifiers against adversarial attacks. The method performs well not only on small-scale datasets but also remains robust on a larger dataset (miniImageNet) and modern models (EfficientNet and WideResNet).
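The paper's implementation is not included in this record. As a rough illustration of the preprocessing the abstract describes, the following NumPy sketch applies a double-layer 3×3 Gaussian filter (standard deviation 1) to an input image and, at training time, injects Gaussian noise before filtering. All function names and the noise standard deviation are illustrative assumptions, not the authors' code; a real model would implement the filtering layer inside the network, between the input and the first convolutional layer.

```python
import numpy as np

def gaussian_kernel(size=3, sigma=1.0):
    """Normalized 2-D Gaussian kernel (3x3, sigma=1 per the paper's best setting)."""
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

def filter2d(img, kernel):
    """'Same'-size 2-D correlation with edge padding (single-channel image)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)), mode="edge")
    out = np.empty(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

def multilayer_filter(img, layers=2, size=3, sigma=1.0):
    """Apply the Gaussian filter `layers` times (double-layer by default)."""
    k = gaussian_kernel(size, sigma)
    for _ in range(layers):
        img = filter2d(img, k)
    return img

def train_preprocess(img, noise_std=0.1, rng=None):
    """Training-time path: noise injection, then multilayer filtering.

    noise_std is a placeholder value; the paper does not state it here.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    noisy = img + rng.normal(0.0, noise_std, img.shape)
    return np.clip(multilayer_filter(noisy), 0.0, 1.0)
```

At inference time only `multilayer_filter` would be applied, so adversarial perturbations are smoothed away before they reach the convolutional layers.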
Pages: 19