Dynamic defenses and the transferability of adversarial examples

Cited by: 0
Authors
Thomas, Sam [1]
Koleini, Farnoosh [1]
Tabrizi, Nasseh [1]
Affiliations
[1] East Carolina Univ, Dept Comp Sci, Greenville, NC 27858 USA
Source
2022 IEEE 4TH INTERNATIONAL CONFERENCE ON TRUST, PRIVACY AND SECURITY IN INTELLIGENT SYSTEMS, AND APPLICATIONS, TPS-ISA | 2022
Keywords
adversarial machine learning; black-box attacks; dynamic defenses;
DOI
10.1109/TPS-ISA56441.2022.00041
CLC number
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Artificial learners are generally vulnerable to adversarial attacks. The field of adversarial machine learning studies machine learning systems that operate in adversarial environments; indeed, machine learning systems can be, and frequently are, trained to produce adversarial inputs against another learner. Although measures can be taken to protect a machine learning system, the protection is neither complete nor guaranteed to last, and the transferability of adversarial examples keeps this an open problem. The main goal of this study is to examine the effectiveness of black-box attacks on a dynamic model. We investigate the currently intractable problem of transferable adversarial examples, as well as a little-explored approach that could provide a solution: implementing the Fast Model-based Online Manifold Regularization (FMOMR) algorithm, a recently published algorithm that fits the needs of our experiment.
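To make the black-box transfer setting concrete, the sketch below (illustrative only; not the paper's code and not the FMOMR algorithm) trains a surrogate logistic-regression model, crafts FGSM perturbations against that surrogate alone, and measures how well they transfer to an independently trained target, both before and after the target takes further online updates in the spirit of a dynamic defense. All function names, data, and hyperparameters here are assumptions made for the demo.

import numpy as np

rng = np.random.default_rng(0)

def make_data(n=2000, d=20):
    # Two Gaussian blobs with labels in {-1, +1}.
    y = rng.choice([-1.0, 1.0], size=n)
    X = y[:, None] * 0.5 + rng.normal(size=(n, d))
    return X, y

def logreg_grad(w, X, y):
    # Gradient of the mean logistic loss log(1 + exp(-y * (x . w))).
    margins = y * (X @ w)
    return -(X * (y / (1.0 + np.exp(margins)))[:, None]).mean(axis=0)

def train_logreg(X, y, lr=0.1, epochs=200):
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        w -= lr * logreg_grad(w, X, y)
    return w

def fgsm(X, y, w, eps=0.5):
    # FGSM against a linear model: step along sign(dLoss/dx) = -sign(y * w).
    return X - eps * np.sign(y[:, None] * w[None, :])

def accuracy(X, y, w):
    return float(np.mean(np.sign(X @ w) == y))

# Black-box setting: the attacker only has the surrogate; the target is
# trained separately on its own sample of the same task.
X_s, y_s = make_data()
X_t, y_t = make_data()
w_surrogate = train_logreg(X_s, y_s)
w_target = train_logreg(X_t, y_t)

X_test, y_test = make_data(n=500)
X_adv = fgsm(X_test, y_test, w_surrogate)  # crafted on the surrogate only

# Dynamic-defense stand-in (mechanism only, not FMOMR): the target keeps
# taking online gradient steps on a fresh data stream after the attacker's
# snapshot was taken, so the attacked model is no longer the deployed one.
w_dynamic = w_target.copy()
for _ in range(50):
    Xb, yb = make_data(n=64)
    w_dynamic -= 0.1 * logreg_grad(w_dynamic, Xb, yb)

print("target clean accuracy:         ", accuracy(X_test, y_test, w_target))
print("target adversarial (transfer): ", accuracy(X_adv, y_test, w_target))
print("dynamic adversarial (transfer):", accuracy(X_adv, y_test, w_dynamic))

In this linear toy the online updates barely move the decision boundary, so the transfer rate changes little; the paper's question is precisely whether richer dynamic updates, such as FMOMR's online manifold regularization, can invalidate the attacker's stale surrogate view.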
Pages: 276-284
Page count: 9