Really natural adversarial examples

Citations: 0
Authors
Anibal Pedraza
Oscar Deniz
Gloria Bueno
Affiliations
[1] VISILAB
[2] ETSI Industriales
Source
International Journal of Machine Learning and Cybernetics | 2022, Vol. 13
Keywords
Natural adversarial; Adversarial examples; Trustworthy machine learning; Computer vision
DOI
Not available
Abstract
The phenomenon of adversarial examples has become one of the most intriguing topics associated with deep learning. So-called adversarial attacks are able to fool deep neural networks with imperceptible perturbations. While the effect is striking, it has been suggested that such carefully crafted injected noise does not necessarily appear in real-world scenarios. In contrast, some authors have looked for ways to generate adversarial noise in physical scenarios (traffic signs, shirts, etc.), thus showing that attackers can indeed fool the networks. In this paper we go beyond that and show that adversarial examples also appear in the real world without any attacker or maliciously selected noise involved. We show this using images from microscopy-related tasks as well as general object recognition with the well-known ImageNet dataset. We compare these natural adversarial examples with artificially generated ones using distance metrics and image quality metrics, and show that the natural adversarial examples are in fact at a greater distance from the originals than artificially generated adversarial examples are.
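
The kind of comparison the abstract describes can be illustrated with a short sketch. The following is a minimal, hypothetical Python example (not the authors' code) that computes common distance metrics (L2, L-infinity) and image quality metrics (PSNR, SSIM) between an original image and its adversarial counterpart, using NumPy and scikit-image; the compare() helper and the random placeholder images are assumptions for illustration only.

# Minimal sketch, assuming float images in [0, 1] with shape (H, W, 3).
# The arrays stand in for a clean image and a natural or attack-crafted
# adversarial version of it.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def compare(original, adversarial):
    """Distance and image quality metrics between two images."""
    diff = adversarial - original
    return {
        "l2": float(np.linalg.norm(diff)),        # Euclidean distance
        "linf": float(np.abs(diff).max()),        # largest per-pixel change
        "psnr": float(peak_signal_noise_ratio(original, adversarial,
                                              data_range=1.0)),
        "ssim": float(structural_similarity(original, adversarial,
                                            channel_axis=-1, data_range=1.0)),
    }

# Toy usage: a random "original" plus small Gaussian noise standing in for
# the perturbation (hypothetical data, not from the paper).
rng = np.random.default_rng(0)
original = rng.random((224, 224, 3)).astype(np.float32)
noisy = original + rng.normal(0.0, 0.01, original.shape).astype(np.float32)
adversarial = np.clip(noisy, 0.0, 1.0)
print(compare(original, adversarial))

Under the paper's finding, a natural adversarial example would typically show larger L2/L-infinity distances and lower PSNR/SSIM against its original than a perturbation crafted by an artificial attack.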
Pages: 1065-1077 (12 pages)
Related papers (50 in total)
  • [31] An approach to improve transferability of adversarial examples
    Zhang, Weihan
    Guo, Ying
    PHYSICAL COMMUNICATION, 2024, 64
  • [32] On the robustness of randomized classifiers to adversarial examples
    Pinot, Rafael
    Meunier, Laurent
    Yger, Florian
    Gouy-Pailler, Cedric
    Chevaleyre, Yann
    Atif, Jamal
    MACHINE LEARNING, 2022, 111 (09) : 3425 - 3457
  • [33] Learning Indistinguishable and Transferable Adversarial Examples
    Zhang, Wu
    Zou, Junhua
    Duan, Yexin
    Zhou, Xingyu
    Pan, Zhisong
    PATTERN RECOGNITION AND COMPUTER VISION, PT IV, 2021, 13022 : 152 - 164
  • [34] Creating valid adversarial examples of malware
    Kozak, Matous
    Jurecek, Martin
    Stamp, Mark
    Di Troia, Fabio
    JOURNAL OF COMPUTER VIROLOGY AND HACKING TECHNIQUES, 2024, 20 (04) : 607 - 621
  • [35] HOW SECURE ARE THE ADVERSARIAL EXAMPLES THEMSELVES?
    Zeng, Hui
    Deng, Kang
    Chen, Biwei
    Peng, Anjie
    2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2022, : 2879 - 2883
  • [36] Analysing Adversarial Examples for Deep Learning
    Jung, Jason
    Akhtar, Naveed
    Hassan, Ghulam
    VISAPP: PROCEEDINGS OF THE 16TH INTERNATIONAL JOINT CONFERENCE ON COMPUTER VISION, IMAGING AND COMPUTER GRAPHICS THEORY AND APPLICATIONS - VOL. 5: VISAPP, 2021, : 585 - 592
  • [37] A General Framework for Adversarial Examples with Objectives
    Sharif, Mahmood
    Bhagavatula, Sruti
    Bauer, Lujo
    Reiter, Michael K.
    ACM TRANSACTIONS ON PRIVACY AND SECURITY, 2019, 22 (03)
  • [39] Generating Adversarial Examples by Adversarial Networks for Semi-supervised Learning
    Ma, Yun
    Mao, Xudong
    Chen, Yangbin
    Li, Qing
    WEB INFORMATION SYSTEMS ENGINEERING - WISE 2019, 2019, 11881 : 115 - 129
  • [40] Adversarial Training Defense Based on Second-order Adversarial Examples
    Qian Yaguan
    Zhang Ximin
    Wang Bin
    Gu Zhaoquan
    Li Wei
    Yun Bensheng
    JOURNAL OF ELECTRONICS & INFORMATION TECHNOLOGY, 2021, 43 (11) : 3367 - 3373