PISA: Pixel skipping-based attentional black-box adversarial attack

Cited by: 6
Authors
Wang, Jie [1 ]
Yin, Zhaoxia [2 ]
Jiang, Jing [3 ]
Tang, Jin [1 ]
Luo, Bin [1 ]
Affiliations
[1] Anhui Univ, Sch Comp Sci & Technol, Anhui Prov Key Lab Multimodal Cognit Computat, Hefei 230601, Anhui, Peoples R China
[2] East China Normal Univ, Sch Commun & Elect Engn, Shanghai 200241, Peoples R China
[3] Anqing Normal Univ, Sch Comp & Informat, Anqing 246133, Anhui, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Adversarial example; Black-box attack; Curse of dimensionality; Attention map; Pixel skipping; Evolutionary algorithms; Optimization;
DOI
10.1016/j.cose.2022.102947
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Black-box adversarial attacks based on evolutionary algorithms have become increasingly popular because the structural knowledge of deep neural networks (DNNs) is often inaccessible. However, the performance of these emerging attacks degrades when fooling DNNs tailored for high-resolution images. One explanation is that they usually attack the entire image, regardless of its spatial semantic information, and thereby encounter the notorious curse of dimensionality. To this end, we propose a pixel skipping and evolutionary algorithm-based attentional black-box adversarial attack, termed PISA. In PISA, only one of every two neighboring pixels in the salient region is recognized as an attack target by leveraging the attention map and pixel skipping, thereby reducing the dimensionality of the black-box attack. PISA can then embed an arbitrary multiobjective evolutionary algorithm, which traverses the reduced set of pixels and generates effective perturbations imperceptible to human vision. Extensive experimental results demonstrate that the proposed PISA is more competitive in attacking high-resolution images than existing black-box, evolutionary algorithm-based attacks. (c) 2022 Elsevier Ltd. All rights reserved.
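To make the dimension-reduction step concrete, below is a minimal NumPy sketch of pixel selection as the abstract describes it: threshold an attention map to obtain the salient region, then keep one of every two neighboring pixels. The checkerboard reading of "one of every two neighboring pixels", the simple thresholding rule, and the names pixel_skip_mask and threshold are illustrative assumptions, not details taken from the paper.

import numpy as np

def pixel_skip_mask(attention_map, threshold=0.5):
    # Salient region: pixels whose attention score exceeds the threshold
    # (assumed rule; the paper's exact saliency criterion may differ).
    salient = attention_map > threshold
    # Checkerboard pattern: of every two neighboring pixels, keep one,
    # roughly halving the search-space dimension of the black-box attack.
    h, w = attention_map.shape
    yy, xx = np.mgrid[0:h, 0:w]
    checkerboard = (yy + xx) % 2 == 0
    return salient & checkerboard

# Example on a random stand-in for the attention map of a 224x224 image.
attn = np.random.rand(224, 224)
mask = pixel_skip_mask(attn)
print(f"{mask.sum()} of {mask.size} pixels selected as perturbation targets")

Per the abstract, the surviving pixels would then form the reduced search space handed to the multiobjective evolutionary algorithm that optimizes the perturbation.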
Pages: 11