Really natural adversarial examples

Cited: 0
Authors
Anibal Pedraza
Oscar Deniz
Gloria Bueno
Institutions
[1] VISILAB
[2] ETSI Industriales
Source
International Journal of Machine Learning and Cybernetics | 2022 / Volume 13
Keywords
Natural adversarial; Adversarial examples; Trustworthy machine learning; Computer vision
DOI
Not available
Abstract
The phenomenon of adversarial examples has become one of the most intriguing topics associated with deep learning. The so-called adversarial attacks have the ability to fool deep neural networks with imperceptible perturbations. While the effect is striking, it has been suggested that such carefully crafted injected noise does not necessarily appear in real-world scenarios. In contrast, some authors have looked for ways to generate adversarial noise in physical scenarios (traffic signs, shirts, etc.), thus showing that attackers can indeed fool the networks. In this paper we go beyond that and show that adversarial examples also appear in the real world without any attacker or maliciously selected noise involved. We show this using images from tasks related to microscopy and also general object recognition with the well-known ImageNet dataset. A comparison between these natural adversarial examples and artificially generated ones is performed using distance metrics and image quality metrics. We also show that the natural adversarial examples are in fact at a higher distance from the originals than in the case of artificially generated adversarial examples.
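The abstract mentions comparing natural and artificially generated adversarial examples via distance metrics and image quality metrics. As a rough illustration only, and not the authors' actual protocol, the following Python/NumPy sketch computes the kinds of per-image quantities such a comparison could rest on: L2 and L-infinity pixel distances and PSNR. The function name `compare` and the random stand-in images are illustrative assumptions.

# Minimal sketch (assumed, not from the paper) of comparing an original image
# with an adversarial counterpart using pixel-space distances and PSNR.
import numpy as np

def compare(original: np.ndarray, adversarial: np.ndarray) -> dict:
    """Return distance and quality metrics for two images scaled to [0, 1]."""
    diff = adversarial.astype(np.float64) - original.astype(np.float64)
    l2 = np.linalg.norm(diff.ravel())      # Euclidean (L2) distance
    linf = np.abs(diff).max()              # worst-case per-pixel change (L-inf)
    mse = np.mean(diff ** 2)
    # PSNR for a data range of 1.0; infinite when the images are identical.
    psnr = 10 * np.log10(1.0 / mse) if mse > 0 else float("inf")
    return {"l2": l2, "linf": linf, "psnr_db": psnr}

# Illustrative usage with random data standing in for real images.
rng = np.random.default_rng(0)
x = rng.random((224, 224, 3))                              # "original" image
x_adv = np.clip(x + rng.normal(0, 0.01, x.shape), 0, 1)    # "adversarial" image
print(compare(x, x_adv))

Under the paper's reported finding, natural adversarial examples would be expected to register larger L2/L-infinity distances from the original than artificially generated ones.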
Pages: 1065-1077
Page count: 12
Related papers (50 in total)
  • [1] Really natural adversarial examples
    Pedraza, Anibal
    Deniz, Oscar
    Bueno, Gloria
    INTERNATIONAL JOURNAL OF MACHINE LEARNING AND CYBERNETICS, 2022, 13 (04) : 1065 - 1077
  • [2] Using Adversarial Examples in Natural Language Processing
    Belohlavek, Petr
    Platek, Ondrej
    Zabokrtsky, Zdenek
    Straka, Milan
    PROCEEDINGS OF THE ELEVENTH INTERNATIONAL CONFERENCE ON LANGUAGE RESOURCES AND EVALUATION (LREC 2018), 2018, : 3693 - 3700
  • [3] Detecting chaos in adversarial examples
    Deniz, Oscar
    Pedraza, Anibal
    Bueno, Gloria
    CHAOS SOLITONS & FRACTALS, 2022, 163
  • [4] Generating Valid and Natural Adversarial Examples with Large Language Models
    Wang, Zimu
    Wang, Wei
    Chen, Qi
    Wang, Qiufeng
    Nguyen, Anh
    PROCEEDINGS OF THE 2024 27TH INTERNATIONAL CONFERENCE ON COMPUTER SUPPORTED COOPERATIVE WORK IN DESIGN, CSCWD 2024, 2024, : 1716 - 1721
  • [5] NATURAL-LOOKING ADVERSARIAL EXAMPLES FROM FREEHAND SKETCHES
    Kim, Hak Gu
    Nanni, Davide
    Suesstrunk, Sabine
    2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2022, : 3723 - 3727
  • [6] Generating natural adversarial examples with universal perturbations for text classification
    Gao, Haoran
    Zhang, Hua
    Yang, Xingguo
    Li, Wenmin
    Gao, Fei
    Wen, Qiaoyan
    NEUROCOMPUTING, 2022, 471 : 175 - 182
  • [7] ADVERSARIAL EXAMPLES FOR GOOD: ADVERSARIAL EXAMPLES GUIDED IMBALANCED LEARNING
    Zhang, Jie
    Zhang, Lei
    Li, Gang
    Wu, Chao
    2022 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, ICIP, 2022, : 136 - 140
  • [8] Natural Scene Statistics for Detecting Adversarial Examples in Deep Neural Networks
    Kherchouche, Anouar
    Fezza, Sid Ahmed
    Hamidouche, Wassim
    Deforges, Olivier
    2020 IEEE 22ND INTERNATIONAL WORKSHOP ON MULTIMEDIA SIGNAL PROCESSING (MMSP), 2020
  • [9] On the Salience of Adversarial Examples
    Fernandez, Amanda
    ADVANCES IN VISUAL COMPUTING, ISVC 2019, PT II, 2019, 11845 : 221 - 232
  • [10] Lyapunov stability for detecting adversarial image examples
    Pedraza, Anibal
    Deniz, Oscar
    Bueno, Gloria
    CHAOS SOLITONS & FRACTALS, 2022, 155