A decade of adversarial examples: a survey on the nature and understanding of neural network non-robustness

Cited: 0
Authors
Trusov, A. V. [1 ,2 ,3 ]
Limonova, E. E. [1 ,2 ]
Arlazarov, V. V. [1 ,2 ]
Affiliations
[1] Russian Acad Sci, Fed Res Ctr Comp Sci & Control, Vavilova 44, Kor 2, Moscow 119333, Russia
[2] Smart Engines Serv LLC, Pr 60 Letiya Oktyabrya 9, Moscow 117312, Russia
[3] Moscow Inst Phys & Technol, Institutskiy Per 9, Dolgoprudnyi 141701, Russia
Keywords
adversarial examples; adversarial deep learning; neural networks; neural network security; computer vision; efficient; attacks; perturbations; architecture; recognition
DOI
10.18287/2412-6179-CO-1494
Chinese Library Classification
O43 [Optics]
Subject Classification Code
070207; 0803
Abstract
Adversarial examples, in the context of computer vision, are inputs deliberately crafted to deceive or mislead artificial neural networks. These examples exploit vulnerabilities in neural networks through minimal alterations to the original input that are imperceptible to humans yet can significantly change the network's output. In this paper, we present a thorough survey of research on adversarial examples, with a primary focus on their impact on neural network classifiers. We closely examine the theoretical capabilities and limitations of artificial neural networks. We then trace the discovery and evolution of adversarial examples, from basic gradient-based techniques to the recent trend of employing generative neural networks to craft them. We discuss the limited effectiveness of existing countermeasures against adversarial examples. Furthermore, we emphasize that adversarial examples originate from the misalignment between human and neural network decision-making processes, which can be attributed to the current methodology for training neural networks. We also argue that the commonly used term "attack on neural networks" is misleading when discussing adversarial deep learning. With this paper, our objective is to provide a comprehensive overview of adversarial examples and to inspire further research into more robust neural networks that align better with human decision-making processes and enhance the security and reliability of computer vision systems in practical applications.
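The abstract mentions basic gradient-based techniques for crafting adversarial examples. Below is a minimal sketch of one such technique, the fast gradient sign method (FGSM); it is not taken from the surveyed paper, and the use of PyTorch as well as the `model`, `image`, `label`, and `epsilon` names are illustrative assumptions only.

```python
# Minimal FGSM sketch (assumed PyTorch API; model, image, and label are hypothetical placeholders).
import torch
import torch.nn.functional as F

def fgsm_example(model: torch.nn.Module,
                 image: torch.Tensor,      # input batch with pixel values in [0, 1]
                 label: torch.Tensor,      # ground-truth class indices
                 epsilon: float = 0.03) -> torch.Tensor:
    """Craft an adversarial example with one step along the sign of the loss gradient."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)   # loss the adversary wants to increase
    loss.backward()                               # gradient of the loss w.r.t. the pixels
    perturbed = image + epsilon * image.grad.sign()
    # Keep the perturbed input a valid image so the change stays small and plausible.
    return perturbed.clamp(0.0, 1.0).detach()
```

With a small `epsilon`, the perturbation is typically imperceptible to a human observer, yet the classifier's prediction on the returned tensor often changes.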
Pages: 222-252
Page count: 31
Related Papers
16 items in total
  • [1] Robustness of Deep Neural Networks in Adversarial Examples
    Teng, Da
    Song, Xiaomin
    Gong, Guanghong
    Han, Liang
    International Journal of Industrial Engineering-Theory Applications and Practice, 2017, 24 (02): 123-133
  • [2] Exploring adversarial examples and adversarial robustness of convolutional neural networks by mutual information
    Zhang J.
    Qian W.
    Cao J.
    Xu D.
    Neural Computing and Applications, 2024, 36 (23): 14379-14394
  • [3] Measuring the Effect of Causal Disentanglement on the Adversarial Robustness of Neural Network Models
    Ness, Preben M.
    Marijan, Dusica
    Bose, Sunanda
    Proceedings of the 32nd ACM International Conference on Information and Knowledge Management, CIKM 2023, 2023: 1907-1916
  • [4] Enhancing Robustness Against Adversarial Examples in Network Intrusion Detection Systems
    Hashemi, Mohammad J.
    Keller, Eric
    2020 IEEE Conference on Network Function Virtualization and Software Defined Networks (NFV-SDN), 2020: 37-43
  • [5] A Simple Stochastic Neural Network for Improving Adversarial Robustness
    Yang, Hao
    Wang, Min
    Yu, Zhengfei
    Zhou, Yun
    2023 IEEE International Conference on Multimedia and Expo, ICME, 2023: 2297-2302
  • [6] SecureAS: A Vulnerability Assessment System for Deep Neural Network Based on Adversarial Examples
    Chu, Yan
    Yue, Xiao
    Wang, Quan
    Wang, Zhengkui
    IEEE Access, 2020, 8: 109156-109167
  • [7] Adversarial Examples Against Deep Neural Network based Steganalysis
    Zhang, Yiwei
    Zhang, Weiming
    Chen, Kejiang
    Liu, Jiayang
    Liu, Yujia
    Yu, Nenghai
    Proceedings of the 6th ACM Workshop on Information Hiding and Multimedia Security (IH&MMSec'18), 2018: 67-72
  • [8] Adversarial Robustness of Neural Networks from the Perspective of Lipschitz Calculus: A Survey
    Zuehlke, Monty-Maximilian
    Kudenko, Daniel
    ACM Computing Surveys, 2025, 57 (06)
  • [9] Adversarial examples detection based on quantum fuzzy convolution neural network
    Huang, Chenyi
    Zhang, Shibin
    Quantum Information Processing, 2024, 23 (04)
  • [10] Adversarial Examples Are Closely Relevant to Neural Network Models - A Preliminary Experiment Explore
    Zhou, Zheng
    Liu, Ju
    Han, Yanyang
    Advances in Swarm Intelligence, ICSI 2022, Pt II, 2022: 155-166