Dynamic defenses and the transferability of adversarial examples

Cited by: 1
Authors
Thomas, Sam [1 ]
Koleini, Farnoosh [1 ]
Tabrizi, Nasseh [1 ]
Affiliation
[1] East Carolina Univ, Dept Comp Sci, Greenville, NC 27858 USA
Source
2022 IEEE 4TH INTERNATIONAL CONFERENCE ON TRUST, PRIVACY AND SECURITY IN INTELLIGENT SYSTEMS, AND APPLICATIONS, TPS-ISA | 2022
Keywords
adversarial machine learning; black-box attacks; dynamic defenses
DOI
10.1109/TPS-ISA56441.2022.00041
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
Artificial learners are generally vulnerable to adversarial attacks. Adversarial machine learning studies machine learning systems that operate in adversarial environments; indeed, machine learning systems can be, and frequently are, trained to produce adversarial inputs against such a learner. Although measures can be taken to protect a machine learning system, the protection is incomplete and not guaranteed to last. This remains an open problem because of the transferability of adversarial examples. The main goal of this study is to examine the effectiveness of black-box attacks on a dynamic model. We investigate the currently intractable problem of transferable adversarial examples, as well as a little-explored approach that could provide a solution, by implementing the Fast Model-based Online Manifold Regularization (FMOMR) algorithm, a recently published algorithm that appeared to fit the needs of our experiment.
Pages: 276-284 (9 pages)