Adversarial attacks and defenses using feature-space stochasticity

Cited by: 4
Authors
Ukita, Jumpei [1]
Ohki, Kenichi [1,2,3]
Affiliations
[1] Univ Tokyo, Sch Med, Dept Physiol, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033, Japan
[2] Int Res Ctr Neurointelligence (WPI-IRCN), 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033, Japan
[3] Inst AI & Beyond, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033, Japan
Keywords
Adversarial attack; Adversarial defense; Feature smoothing; Deep neural networks
DOI
10.1016/j.neunet.2023.08.022
Chinese Library Classification (CLC)
TP18 [Theory of artificial intelligence]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Recent studies of deep neural networks have shown that injecting random noise into the input layer of a network contributes to robustness against ℓp-norm-bounded adversarial perturbations. However, to defend against unrestricted adversarial examples, most of which are not ℓp-norm-bounded in the input space, such input-layer random noise may not be sufficient. In the first part of this study, we generated a novel class of unrestricted adversarial examples termed feature-space adversarial examples. These examples are far from the original data in the input space but adjacent to the original data in a hidden-layer feature space, and far again in the output layer. In the second part of this study, we empirically showed that while injecting random noise in the input layer was unable to defend against these feature-space adversarial examples, injecting random noise in the hidden layer defended against them. These results highlight a novel benefit of stochasticity in higher layers: it is useful for defending against feature-space adversarial examples, a class of unrestricted adversarial examples. (c) 2023 Elsevier Ltd. All rights reserved.
Pages: 875-889
Number of pages: 15
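
The abstract describes two concrete mechanisms: a feature-space attack that searches for inputs far from a clean example in the input space yet adjacent to it in a hidden-layer feature space, and a defense that injects random noise into hidden-layer activations instead of (or in addition to) the input. Below is a minimal PyTorch sketch of both, written from the abstract alone rather than from the authors' code; the architecture, the noise scales sigma_in and sigma_hid, and the attack hyperparameters steps, lr, and lam are all illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureNoise(nn.Module):
    # Adds zero-mean Gaussian noise to whatever activations pass through it.
    def __init__(self, sigma):
        super().__init__()
        self.sigma = sigma

    def forward(self, x):
        return x + self.sigma * torch.randn_like(x) if self.sigma > 0 else x

class NoisyNet(nn.Module):
    # sigma_in > 0 gives the input-layer noise baseline; sigma_hid > 0 gives
    # the hidden-layer ("feature smoothing") defense the abstract argues for.
    def __init__(self, sigma_in=0.0, sigma_hid=0.1, num_classes=10):
        super().__init__()
        self.input_noise = FeatureNoise(sigma_in)
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        self.hidden_noise = FeatureNoise(sigma_hid)
        self.classifier = nn.Linear(64 * 4 * 4, num_classes)

    def forward(self, x):
        h = self.features(self.input_noise(x))
        h = self.hidden_noise(h)  # stochasticity in a higher layer
        return self.classifier(h.flatten(1))

def feature_space_attack(model, x, y, steps=200, lr=0.05, lam=1.0):
    # Hypothetical reading of the attack in the abstract: starting from random
    # noise (far from x in input space, so not ℓp-bounded), optimize x_adv so
    # that its hidden features stay close to those of x while the classifier
    # output moves away from the true label y. Set sigma_hid=0 in `model` to
    # attack the deterministic baseline.
    with torch.no_grad():
        h_clean = model.features(x)
    x_adv = torch.rand_like(x, requires_grad=True)
    opt = torch.optim.Adam([x_adv], lr=lr)
    for _ in range(steps):
        h_adv = model.features(x_adv)
        logits = model.classifier(model.hidden_noise(h_adv).flatten(1))
        # Close in feature space (MSE term), far in output space (CE term).
        loss = F.mse_loss(h_adv, h_clean) - lam * F.cross_entropy(logits, y)
        opt.zero_grad()
        loss.backward()
        opt.step()
        x_adv.data.clamp_(0.0, 1.0)
    return x_adv.detach()

The intuition this sketch encodes matches the abstract's claim: hidden-layer noise randomizes exactly the feature-space neighborhood the attack optimizes into, whereas input-layer noise perturbs a region of input space the attack never constrained itself to stay near.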