Adversarial Examples Are Closely Relevant to Neural Network Models - A Preliminary Experiment Explore

Times Cited: 0
|
Authors
Zhou, Zheng [1 ]
Liu, Ju [1 ]
Han, Yanyang [1 ]
Affiliations
[1] Shandong Univ, Qingdao 266237, Peoples R China
Source
ADVANCES IN SWARM INTELLIGENCE, ICSI 2022, PT II | 2022
Funding
U.S. National Science Foundation;
Keywords
Adversarial examples; Neural networks; Attack & defense; Activation functions; Loss functions; Security of AI; ROBUSTNESS;
DOI
10.1007/978-3-031-09726-3_14
CLC Number (Chinese Library Classification)
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
Neural networks are fragile: they can be readily attacked by adversarial examples. Consequently, researchers worldwide have paid close attention to adversarial examples, producing many results such as attack and defense approaches and algorithms. However, how adversarial examples affect neural networks remains poorly understood. We present hypotheses and design extensive experiments to gather more information about adversarial examples and to verify these hypotheses. Through these experiments, we investigate a neural network's sensitivity to adversarial examples along several dimensions, e.g., model architecture, activation function, and loss function. The results show that adversarial examples are closely related to all of these factors. Notably, this sensitivity property can help distinguish adversarial examples from clean data. This work should inspire further research on adversarial example detection.
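For context, the sketch below illustrates the two ingredients the abstract refers to: generating an adversarial example (here with the fast gradient sign method, FGSM, a standard attack that the paper does not necessarily use) and probing a model's sensitivity by measuring how its outputs shift under small random input perturbations. PyTorch and the names `fgsm_attack` and `sensitivity_score` are assumptions for illustration, not the paper's implementation.

```python
# Illustrative sketch only: FGSM attack plus a simple sensitivity probe.
# PyTorch is assumed; `model` is any classifier mapping inputs to logits.
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Fast gradient sign method: perturb x in the direction that increases the loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

def sensitivity_score(model, x, sigma=0.01, n_samples=16):
    """Average change in softmax output under small Gaussian input noise.
    Hypothetical detection heuristic: adversarial inputs often sit near
    decision boundaries, so their outputs tend to shift more under noise."""
    with torch.no_grad():
        base = F.softmax(model(x), dim=-1)
        total = 0.0
        for _ in range(n_samples):
            noisy = (x + sigma * torch.randn_like(x)).clamp(0.0, 1.0)
            total += (F.softmax(model(noisy), dim=-1) - base).abs().sum().item()
    return total / n_samples
```

Under this sketch, one would flag an input as suspicious when its sensitivity score exceeds a threshold calibrated on clean data; the paper's actual criterion may differ.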
Pages: 155-166
Number of pages: 12
Related Papers
9 records
  • [1] Adversarial Examples Detection With Bayesian Neural Network
    Li, Yao
    Tang, Tongyi
    Hsieh, Cho-Jui
    Lee, Thomas C. M.
    IEEE TRANSACTIONS ON EMERGING TOPICS IN COMPUTATIONAL INTELLIGENCE, 2024, 8 (05): 3654-3664
  • [2] Adversarial Examples Against Deep Neural Network based Steganalysis
    Zhang, Yiwei
    Zhang, Weiming
    Chen, Kejiang
    Liu, Jiayang
    Liu, Yujia
    Yu, Nenghai
    PROCEEDINGS OF THE 6TH ACM WORKSHOP ON INFORMATION HIDING AND MULTIMEDIA SECURITY (IH&MMSEC'18), 2018: 67-72
  • [3] Adversarial examples detection based on quantum fuzzy convolution neural network
    Huang, Chenyi
    Zhang, Shibin
    QUANTUM INFORMATION PROCESSING, 2024, 23 (04)
  • [4] SecureAS: A Vulnerability Assessment System for Deep Neural Network Based on Adversarial Examples
    Chu, Yan
    Yue, Xiao
    Wang, Quan
    Wang, Zhengkui
    IEEE ACCESS, 2020, 8: 109156-109167
  • [5] A decade of adversarial examples: a survey on the nature and understanding of neural network non-robustness
    Trusov, A. V.
    Limonova, E. E.
    Arlazarov, V. V.
    COMPUTER OPTICS, 2025, 49 (02): 222-252
  • [6] Watermarking of Deep Recurrent Neural Network Using Adversarial Examples to Protect Intellectual Property
    Rathi, Pulkit
    Bhadauria, Saumya
    Rathi, Sugandha
    APPLIED ARTIFICIAL INTELLIGENCE, 2022, 36 (01)
  • [7] Securing Network Traffic Classification Models against Adversarial Examples Using Derived Variables
    Adeke, James Msughter
    Liu, Guangjie
    Zhao, Junjie
    Wu, Nannan
    Bashir, Hafsat Muhammad
    Davoli, Franco
    FUTURE INTERNET, 2023, 15 (12)
  • [8] Robust deep neural network surrogate models with uncertainty quantification via adversarial training
    Zhang, Lixiang
    Li, Jia
    STATISTICAL ANALYSIS AND DATA MINING, 2023, 16 (03): 295-304
  • [9] Shadow Detection in Single RGB Images Using a Context Preserver Convolutional Neural Network Trained by Multiple Adversarial Examples
    Mohajerani, Sorour
    Saeedi, Parvaneh
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2019, 28 (08): 4117-4129