Adversarial example detection by predicting adversarial noise in the frequency domain

Cited by: 0
Authors
Seunghwan Jung
Minyoung Chung
Yeong-Gil Shin
Affiliations
[1] Seoul National University,Department of Computer Science and Engineering
[2] Soongsil University,School of Software
Source
Multimedia Tools and Applications | 2023 / Vol. 82
Keywords
Adversarial example detection; Adversarial noise prediction; Frequency domain classification; Prediction-based adversarial detection;
DOI: not available
Abstract
Recent advances in deep neural network (DNN) techniques have increased the importance of the security and robustness of algorithms built on DNNs. However, several studies have demonstrated that neural networks are vulnerable to adversarial examples, which are generated by adding crafted adversarial noise to input images. Because this noise is typically imperceptible to the human eye, defending DNNs against it is difficult. One line of defense detects adversarial examples by analyzing characteristics of the input images. Recent studies have used the hidden-layer outputs of the target classifier to improve robustness, but they require access to the target classifier. Moreover, they include no post-processing step for detected adversarial examples, which are simply discarded. To resolve these problems, we propose a novel detection-based method that predicts the adversarial noise and detects adversarial examples from the predicted noise, without any information about the target classifier. We first generated adversarial examples and their adversarial noises, obtained as the residual between the original and adversarial images. We then trained the proposed adversarial noise predictor to estimate the adversarial noise image, and trained the adversarial detector on the input images and the predicted noises. The proposed framework has the advantage of being agnostic to the input image modality. Moreover, the predicted noise can be used to reconstruct detected adversarial examples as non-adversarial images rather than discarding them. We evaluated our method against the fast gradient sign method (FGSM), basic iterative method (BIM), projected gradient descent (PGD), DeepFool, and Carlini & Wagner attacks on the CIFAR-10 and CIFAR-100 datasets provided by the Canadian Institute for Advanced Research (CIFAR).
Our method achieved significant improvements in detection accuracy over state-of-the-art methods and resolved the problem of wasting detected adversarial examples. Being agnostic to the input image modality, the method showed that the noise predictor successfully captures the noise in the Fourier domain, improving detection performance. Moreover, the reconstruction step using the predicted noise resolves the post-processing problem for detected adversarial examples.
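The data-preparation steps the abstract describes can be sketched in a few lines: craft a toy FGSM-style perturbation, take the residual between the adversarial and original images as the noise-predictor's training target, and view that noise in the frequency domain. This is a minimal numpy illustration, not the authors' code; the function names, the stand-in gradient, and the epsilon value are assumptions for demonstration.

```python
import numpy as np

def fgsm_like_noise(grad, eps=8 / 255):
    # FGSM crafts noise as eps * sign(gradient of the loss w.r.t. the input)
    return eps * np.sign(grad)

def adversarial_residual(original, adversarial):
    # The noise predictor's training target: the residual between the
    # adversarial example and the original image
    return adversarial - original

def frequency_representation(noise):
    # Per-channel 2-D FFT magnitude: a frequency-domain view of the noise
    return np.abs(np.fft.fft2(noise, axes=(0, 1)))

rng = np.random.default_rng(0)
x = rng.random((32, 32, 3)).astype(np.float32)            # clean CIFAR-sized image
g = rng.standard_normal((32, 32, 3)).astype(np.float32)   # stand-in loss gradient
x_adv = np.clip(x + fgsm_like_noise(g), 0.0, 1.0)         # toy adversarial example

noise = adversarial_residual(x, x_adv)
spec = frequency_representation(noise)
print(noise.shape, spec.shape)  # both (32, 32, 3)
```

In the paper's pipeline a trained predictor would estimate `noise` directly from `x_adv`; the estimate then feeds both the detector and the reconstruction step (`x_adv - predicted_noise`).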
Pages: 25235-25251 (16 pages)