Dynamic defenses and the transferability of adversarial examples

Cited: 0
Authors
Thomas, Sam [1 ]
Koleini, Farnoosh [1 ]
Tabrizi, Nasseh [1 ]
Institutions
[1] East Carolina Univ, Dept Comp Sci, Greenville, NC 27858 USA
Source
2022 IEEE 4TH INTERNATIONAL CONFERENCE ON TRUST, PRIVACY AND SECURITY IN INTELLIGENT SYSTEMS, AND APPLICATIONS, TPS-ISA | 2022
Keywords
adversarial machine learning; black-box attacks; dynamic defenses;
DOI
10.1109/TPS-ISA56441.2022.00041
CLC classification
TP18 [Artificial intelligence theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Artificial learners are generally vulnerable to adversarial attacks. The field of adversarial machine learning studies machine learning systems operating in adversarial environments; indeed, machine learning systems can be, and frequently are, trained to produce adversarial inputs against such a learner. Although measures can be taken to protect a machine learning system, the protection is neither complete nor guaranteed to last. The issue remains open because of the transferability of adversarial examples. The main goal of this study is to examine the effectiveness of black-box attacks on a dynamic model. This study investigates the currently intractable problem of transferable adversarial examples, as well as a little-explored approach that could provide a solution, by implementing the Fast Model-based Online Manifold Regularization (FMOMR) algorithm, a recently published algorithm that fit the needs of our experiment.
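The transferability phenomenon the abstract refers to can be illustrated with a minimal toy sketch (this is not the paper's FMOMR setup, and the weights below are hypothetical): an FGSM-style perturbation crafted against one linear "surrogate" classifier also flips the prediction of a different "target" model the attacker never queried, which is exactly what makes black-box attacks possible.

```python
import numpy as np

def predict(w, b, x):
    """Linear classifier: returns +1 or -1 according to the sign of w.x + b."""
    return 1 if w @ x + b > 0 else -1

def fgsm(w, x, eps):
    """FGSM for a linear model: step against the sign of the gradient of the
    score w.x, with attack budget eps (max per-coordinate perturbation)."""
    return x - eps * np.sign(w)

# Two similar but distinct linear models (hypothetical, hand-picked weights).
w_surrogate = np.array([1.0, 2.0, -1.0])
w_target = np.array([0.9, 1.8, -1.2])
b = 0.0

x = np.array([0.2, 0.3, 0.1])           # clean input: both models say +1
x_adv = fgsm(w_surrogate, x, eps=0.5)   # crafted using only the surrogate

print(predict(w_surrogate, b, x), predict(w_target, b, x))          # 1 1
print(predict(w_surrogate, b, x_adv), predict(w_target, b, x_adv))  # -1 -1
```

Because the two decision boundaries are closely aligned, the perturbation direction found on the surrogate carries over to the target, so the adversarial example "transfers" without any access to the target's weights.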
Pages: 276 - 284
Page count: 9
Related papers
50 records
  • [1] Ranking the Transferability of Adversarial Examples
    Levy, Moshe
    Amit, Guy
    Elovici, Yuval
    Mirsky, Yisroel
    ACM TRANSACTIONS ON INTELLIGENT SYSTEMS AND TECHNOLOGY, 2024, 15 (05)
  • [2] An approach to improve transferability of adversarial examples
    Zhang, Weihan
    Guo, Ying
    PHYSICAL COMMUNICATION, 2024, 64
  • [3] Remix: Towards the transferability of adversarial examples
    Zhao, Hongzhi
    Hao, Lingguang
    Hao, Kuangrong
    Wei, Bing
    Cai, Xin
    NEURAL NETWORKS, 2023, 163 : 367 - 378
  • [4] StyLess: Boosting the Transferability of Adversarial Examples
    Liang, Kaisheng
    Xiao, Bin
    2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2023, : 8163 - 8172
  • [5] On the Role of Generalization in Transferability of Adversarial Examples
    Wang, Yilin
    Farnia, Farzan
    UNCERTAINTY IN ARTIFICIAL INTELLIGENCE, 2023, 216 : 2259 - 2270
  • [6] Towards Universal Adversarial Examples and Defenses
    Rakin, Adnan Siraj
    Wang, Ye
    Aeron, Shuchin
    Koike-Akino, Toshiaki
    Moulin, Pierre
    Parsons, Kieran
    2021 IEEE INFORMATION THEORY WORKSHOP (ITW), 2021,
  • [7] Improving the transferability of adversarial examples with path tuning
    Li, Tianyu
    Li, Xiaoyu
    Ke, Wuping
    Tian, Xuwei
    Zheng, Desheng
    Lu, Chao
    APPLIED INTELLIGENCE, 2024, 54 (23) : 12194 - 12214
  • [8] Improving Transferability of Adversarial Examples with Input Diversity
    Xie, Cihang
    Zhang, Zhishuai
    Zhou, Yuyin
    Bai, Song
    Wang, Jianyu
    Ren, Zhou
    Yuille, Alan
    2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2019), 2019, : 2725 - 2734
  • [9] Enhancing the Transferability of Adversarial Examples with Feature Transformation
    Xu, Hao-Qi
    Hu, Cong
    Yin, He-Feng
    MATHEMATICS, 2022, 10 (16)
  • [10] Enhancing Transferability of Adversarial Examples with Spatial Momentum
    Wang, Guoqiu
    Yan, Huanqian
    Wei, Xingxing
    PATTERN RECOGNITION AND COMPUTER VISION, PT I, PRCV 2022, 2022, 13534 : 593 - 604