POSES: Patch Optimization Strategies for Efficiency and Stealthiness Using eXplainable AI

Cited: 0
Authors
Lee, Han-Ju [1 ]
Kim, Jin-Seoung [1 ]
Lee, Han-Jin [1 ]
Choi, Seok-Hwan [1 ]
Affiliations
[1] Yonsei Univ, Div Software, Wonju Si 26493, Gangwon Do, South Korea
Funding
National Research Foundation of Singapore;
Keywords
Perturbation methods; Computational modeling; Deep learning; Explainable AI; Data models; Optimization methods; Linear programming; Generative adversarial networks; Computer vision; Benchmark testing; Adversarial example; adversarial patch; eXplainable AI (XAI);
DOI
10.1109/ACCESS.2025.3555044
CLC Classification Number
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812;
Abstract
Adversarial examples, which are carefully crafted inputs designed to deceive deep learning models, pose significant challenges in artificial intelligence. While research on adversarial examples has primarily focused on digital-world attacks, recent work has proposed adversarial patches as attention expands to physical-world attacks. Unlike traditional adversarial examples, which rely on small perturbations, adversarial patches employ large perturbations to bypass existing defense mechanisms against adversarial attacks. Adversarial patches have been shown to be highly effective in causing deep learning models to misclassify. However, existing adversarial patches are often limited by their noticeable appearance and the high computational cost of generating them. To address these problems, we propose a new adversarial patch generation method called Patch Optimization Strategies for Efficiency and Stealthiness (POSES). POSES uses a two-step optimization architecture that employs an eXplainable AI (XAI)-based method to optimize the location and size of adversarial patches. Experimental results on benchmark datasets demonstrate that POSES enhances the stealthiness of adversarial patches while maintaining a high attack success rate. We also show that POSES improves attack efficiency by reducing the number of iterations required.
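The abstract describes a two-step design in which an XAI signal first guides where the patch is placed before the patch pixels themselves are optimized. The sketch below is a rough illustration of that general idea only, not the authors' POSES implementation: it uses simple input-gradient saliency as a stand-in XAI component to pick a patch location, then optimizes the patch with PyTorch; the model choice, patch size, and step counts are hypothetical.

# Illustrative sketch only: saliency-guided patch placement followed by
# patch pixel optimization. Not the paper's method; all settings are assumptions.
import torch
import torch.nn.functional as F
from torchvision import models

def saliency_map(model, image, label):
    """Absolute input-gradient saliency, averaged over color channels."""
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image.unsqueeze(0)), label.unsqueeze(0))
    loss.backward()
    return image.grad.abs().mean(dim=0)  # (H, W)

def pick_patch_location(sal, patch_size):
    """Slide a patch_size window over the saliency map and return the
    top-left corner with the highest total saliency."""
    window = torch.ones(1, 1, patch_size, patch_size)
    scores = F.conv2d(sal[None, None], window)  # (1, 1, H', W')
    idx = scores.flatten().argmax().item()
    w_out = scores.shape[-1]
    return idx // w_out, idx % w_out  # (row, col)

def optimize_patch(model, image, label, patch_size=32, steps=100, lr=0.05):
    """Untargeted attack: maximize loss on the true label via a small patch."""
    model.eval()
    sal = saliency_map(model, image, label)
    r, c = pick_patch_location(sal, patch_size)
    patch = torch.rand(3, patch_size, patch_size, requires_grad=True)
    opt = torch.optim.Adam([patch], lr=lr)
    for _ in range(steps):
        adv = image.clone()
        adv[:, r:r + patch_size, c:c + patch_size] = patch.clamp(0, 1)
        loss = -F.cross_entropy(model(adv.unsqueeze(0)), label.unsqueeze(0))
        opt.zero_grad()
        loss.backward()
        opt.step()
    return patch.detach().clamp(0, 1), (r, c)

if __name__ == "__main__":
    # Untrained weights so the sketch runs offline; use pretrained weights in practice.
    model = models.resnet18(weights=None)
    image = torch.rand(3, 224, 224)   # stand-in for a real input image
    label = torch.tensor(207)         # arbitrary class index for illustration
    patch, loc = optimize_patch(model, image, label)
    print("patch placed at", loc)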
Pages: 57166-57176
Number of pages: 11