Really natural adversarial examples

Cited: 0
Authors
Anibal Pedraza
Oscar Deniz
Gloria Bueno
Institutions
[1] VISILAB,
[2] ETSI Industriales
Source
International Journal of Machine Learning and Cybernetics | 2022 / Volume 13
Keywords
Natural adversarial; Adversarial examples; Trustworthy machine learning; Computer vision;
DOI: not available
Abstract
The phenomenon of adversarial examples has become one of the most intriguing topics associated with deep learning. The so-called adversarial attacks have the ability to fool deep neural networks with imperceptible perturbations. While the effect is striking, it has been suggested that such carefully selected injected noise does not necessarily appear in real-world scenarios. In contrast, some authors have looked for ways to generate adversarial noise in physical scenarios (traffic signs, shirts, etc.), thus showing that attackers can indeed fool the networks. In this paper we go beyond that and show that adversarial examples also appear in the real world without any attacker or maliciously selected noise involved. We show this by using images from tasks related to microscopy, as well as general object recognition with the well-known ImageNet dataset. A comparison between these natural and the artificially generated adversarial examples is performed using distance metrics and image quality metrics. We also show that the natural adversarial examples are in fact at a higher distance from the originals than in the case of artificially generated adversarial examples.
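The abstract's comparison rests on measuring how far a perturbed image lies from its original. As a minimal sketch of that kind of measurement (the paper's exact metric choices are not listed here), the L2 distance and the PSNR image-quality metric can be computed directly with NumPy; the image arrays below are synthetic placeholders, not data from the paper:

```python
import numpy as np

def l2_distance(original, perturbed):
    # Euclidean (L2) distance between two images of equal shape; larger = farther apart.
    diff = original.astype(np.float64) - perturbed.astype(np.float64)
    return float(np.linalg.norm(diff))

def psnr(original, perturbed, max_val=255.0):
    # Peak signal-to-noise ratio in dB; higher = better perceived quality.
    mse = np.mean((original.astype(np.float64) - perturbed.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return float(10.0 * np.log10(max_val ** 2 / mse))

# Toy example: one "clean" image and two perturbed copies with small vs. large noise.
rng = np.random.default_rng(0)
clean = rng.integers(0, 256, size=(32, 32, 3)).astype(np.float64)
small = np.clip(clean + rng.normal(0, 1.0, clean.shape), 0, 255)
large = np.clip(clean + rng.normal(0, 8.0, clean.shape), 0, 255)

# A stronger perturbation sits farther from the original and degrades quality more.
print(l2_distance(clean, small), l2_distance(clean, large))
print(psnr(clean, small), psnr(clean, large))
```

The same pair of functions can rank natural versus crafted adversarial examples by distance, which is the comparison the abstract describes.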
Pages: 1065–1077
Number of pages: 12
Related papers (50 total)
  • [11] Improving the transferability of adversarial examples with path tuning
    Li, Tianyu
    Li, Xiaoyu
    Ke, Wuping
    Tian, Xuwei
    Zheng, Desheng
    Lu, Chao
    APPLIED INTELLIGENCE, 2024, 54 (23) : 12194 - 12214
  • [12] On the Relationship between Generalization and Robustness to Adversarial Examples
    Pedraza, Anibal
    Deniz, Oscar
    Bueno, Gloria
    SYMMETRY-BASEL, 2021, 13 (05):
  • [13] Enhancing the transferability of adversarial examples on vision transformers
    Guan, Yujiao
    Yang, Haoyu
    Qu, Xiaotong
    Wang, Xiaodong
    JOURNAL OF ELECTRONIC IMAGING, 2024, 33 (02)
  • [14] Generation and Countermeasures of adversarial examples on vision: a survey
    Liu, Jiangfan
    Li, Yishan
    Guo, Yanming
    Liu, Yu
    Tang, Jun
    Nie, Ying
    ARTIFICIAL INTELLIGENCE REVIEW, 2024, 57 (08)
  • [15] Generating Adversarial Examples With Conditional Generative Adversarial Net
    Yu, Ping
    Song, Kaitao
    Lu, Jianfeng
    2018 24TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2018, : 676 - 681
  • [16] Survey on Generating Adversarial Examples
    Pan W.-W.
    Wang X.-Y.
    Song M.-L.
    Chen C.
    Ruan Jian Xue Bao/Journal of Software, 2020, 31 (01): : 67 - 81
  • [17] Adversarial Examples in Remote Sensing
    Czaja, Wojciech
    Fendley, Neil
    Pekala, Michael
    Ratto, Christopher
    Wang, I-Jeng
    26TH ACM SIGSPATIAL INTERNATIONAL CONFERENCE ON ADVANCES IN GEOGRAPHIC INFORMATION SYSTEMS (ACM SIGSPATIAL GIS 2018), 2018, : 408 - 411
  • [18] On The Generation of Unrestricted Adversarial Examples
    Khoshpasand, Mehrgan
    Ghorbani, Ali
    50TH ANNUAL IEEE/IFIP INTERNATIONAL CONFERENCE ON DEPENDABLE SYSTEMS AND NETWORKS WORKSHOPS (DSN-W 2020), 2020, : 9 - 15
  • [19] Adversarial Minimax Training for Robustness Against Adversarial Examples
    Komiyama, Ryota
    Hattori, Motonobu
    NEURAL INFORMATION PROCESSING (ICONIP 2018), PT II, 2018, 11302 : 690 - 699
  • [20] Improving the transferability of adversarial examples through neighborhood attribution
    Ke, Wuping
    Zheng, Desheng
    Li, Xiaoyu
    He, Yuanhang
    Li, Tianyu
    Min, Fan
    KNOWLEDGE-BASED SYSTEMS, 2024, 296