Sensitive region-aware black-box adversarial attacks

Cited: 9
Authors
Lin, Chenhao [1 ]
Han, Sicong [1 ]
Zhu, Jiongli [2 ]
Li, Qian [1 ]
Shen, Chao [1 ]
Zhang, Youwei [3 ]
Guan, Xiaohong [1 ]
Affiliations
[1] Xi'an Jiaotong Univ, Sch Cyber Sci & Engn, 28 West Xianning Rd, Xi'an 710049, Shaanxi, Peoples R China
[2] Univ Calif San Diego, 9500 Gilman Dr, La Jolla, CA 92093 USA
[3] Zhengzhou Xinda Inst Adv Technol, 55 Lianhua St, Zhengzhou 450001, Henan, Peoples R China
Funding
China Postdoctoral Science Foundation;
Keywords
Deep learning; Adversarial example; Sensitive region; Imperception attack; Evolution
DOI
10.1016/j.ins.2023.04.008
CLC number
TP [Automation Technology, Computer Technology];
Discipline code
0812;
Abstract
Recent research on adversarial attacks has highlighted the vulnerability of deep neural networks (DNNs) to perturbations. While existing studies generate adversarial perturbations spread across the entire image, such global perturbations may be visible to human eyes, reducing their effectiveness in real-world scenarios. To alleviate this issue, recent works propose modifying a limited number of input pixels to implement adversarial attacks. However, these approaches still have limitations in terms of both imperceptibility and efficiency. This paper proposes a novel plug-in framework called Sensitive Region-Aware Attack (SRA), which generates soft-label black-box adversarial examples using a sensitivity map and evolution strategies. First, a transferable black-box sensitivity map generation approach is proposed to identify the sensitive regions of input images. To perform SRA with a limited number of perturbed pixels, a dynamic ℓ0 and ℓ∞ adjustment strategy is introduced. Furthermore, an adaptive evolution strategy is employed to optimize the selection of the generated sensitive regions, allowing effective and imperceptible attacks. Experimental results demonstrate that SRA achieves an imperceptible soft-label black-box attack with a 96.43% success rate using less than 20% of the image pixels on ImageNet and a 100% success rate using 30% of the image pixels on CIFAR-10.
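Since the abstract outlines the attack pipeline (a surrogate-derived sensitivity map fixing an ℓ0 pixel budget, then an evolution-strategy search for an ℓ∞-bounded perturbation inside that region), a minimal Python sketch of such a loop may help orient the reader. Everything below is an illustrative assumption, not the paper's implementation: model_query, sensitivity_map, and all parameter defaults are hypothetical, and the estimator shown is a generic NES-style evolution strategy rather than the authors' adaptive variant.

import numpy as np

# Hypothetical sketch only. `model_query(img)` is assumed to return softmax
# scores for one image (soft-label black-box access); `sensitivity_map` is
# assumed to hold per-pixel scores from a transferable surrogate and to have
# the same shape as `x`.
def sra_sketch(x, y_true, model_query, sensitivity_map,
               pixel_budget=0.2, eps=8 / 255,
               pop_size=20, sigma=0.1, lr=0.02, steps=500):
    # l0 side: restrict the perturbation to the top `pixel_budget`
    # fraction of pixels ranked by the sensitivity map.
    k = max(1, int(pixel_budget * sensitivity_map.size))
    thresh = np.partition(sensitivity_map.ravel(), -k)[-k]
    mask = (sensitivity_map >= thresh).astype(x.dtype)

    delta = np.zeros_like(x)
    for _ in range(steps):
        # Antithetic Gaussian sampling for an NES-style gradient estimate.
        half = np.random.randn(pop_size // 2, *x.shape)
        noise = np.concatenate([half, -half])
        losses = np.array([
            model_query(np.clip(x + np.clip(delta + sigma * n * mask,
                                            -eps, eps), 0.0, 1.0))[y_true]
            for n in noise
        ])
        # Estimated gradient of the true-class score w.r.t. delta.
        grad = np.tensordot(losses, noise, axes=1) / (pop_size * sigma)
        # Descend on the true-class score while keeping the l_inf bound
        # and the sensitive-region mask.
        delta = np.clip(delta - lr * grad * mask, -eps, eps)
        if model_query(np.clip(x + delta, 0.0, 1.0)).argmax() != y_true:
            break  # untargeted attack succeeded
    return np.clip(x + delta, 0.0, 1.0)

For brevity this sketch fixes the sensitive-region mask once up front; in the paper, the region selection and the ℓ0/ℓ∞ budgets are adjusted dynamically during the attack.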
Pages: 16