Mask-guided noise restriction adversarial attacks for image classification

Cited by: 15
Authors
Duan, Yexin [1,2]
Zhou, Xingyu [3 ]
Zou, Junhua [1 ]
Qiu, Junyang [4 ]
Zhang, Jin [2 ]
Pan, Zhisong [1 ]
Affiliations
[1] Army Engn Univ PLA, Command & Control Engn Coll, Nanjing, Peoples R China
[2] Army Mil Transportat Univ PLA, Zhenjiang Campus, Zhenjiang, Jiangsu, Peoples R China
[3] Army Engn Univ PLA, Commun Engn Coll, Nanjing, Peoples R China
[4] Jiangnan Inst Comp Technol, Wuxi, Jiangsu, Peoples R China
Keywords
Deep neural network; Noise restriction; Adversarial example; Transferability; Adversarial attack;
DOI
10.1016/j.cose.2020.102111
Chinese Library Classification (CLC)
TP [Automation technology, computer technology];
Subject classification code
0812;
Abstract
Deep neural networks (DNNs) are vulnerable to adversarial examples, which are generated by adding small noises to benign examples and cause a deep model to output incorrect predictions. These noises are often imperceptible to humans, but they become more noticeable on images with plain backgrounds or when the noise magnitude increases. To address this issue, we propose a mask-guided adversarial attack method that removes the noise from semantically irrelevant background regions and thereby makes the adversarial noise more imperceptible. In addition, we enhance the transferability of the adversarial examples with a rotation input strategy. We first convert the image saliency maps produced by a salient object detection technique into binary masks; we then combine the proposed rotation input strategy with an iterative attack method to generate stronger adversarial images, using the binary masks to restrict the noise to the salient objects/regions at each iteration. Experimental results show that the noise of the resulting adversarial examples is far less visible than that of vanilla global-noise adversarial examples, and our best attack reaches an average success rate of 85.9% in the black-box setting, demonstrating the effectiveness of the proposed method. (C) 2020 Elsevier Ltd. All rights reserved.
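The attack loop described in the abstract can be sketched roughly as follows. This is a minimal illustration only, assuming an I-FGSM-style update in PyTorch; the step sizes, rotation angles, threshold, and the helper names `model` and `saliency_to_mask` are assumptions for illustration, not the authors' exact implementation.

```python
# Minimal sketch of a mask-guided, rotation-augmented iterative attack.
# Assumed details: hyper-parameters, rotation angles, and helper names are illustrative.
import torch
import torch.nn.functional as F
import torchvision.transforms.functional as TF


def saliency_to_mask(saliency, threshold=0.5):
    """Binarize a saliency map (values in [0, 1]) into a 0/1 mask."""
    return (saliency > threshold).float()


def mask_guided_attack(model, x, y, mask, eps=16 / 255, steps=10,
                       angles=(-15, 0, 15)):
    """Iterative FGSM-style attack whose noise is restricted to `mask`.

    At every step the gradient is averaged over several rotated copies of the
    current adversarial image (rotation input strategy), and the accumulated
    perturbation is multiplied by the binary mask so background pixels stay
    untouched.
    """
    alpha = eps / steps                      # per-step budget
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        grad = torch.zeros_like(x_adv)
        for angle in angles:                 # average gradients over rotated inputs
            logits = model(TF.rotate(x_adv, angle))
            loss = F.cross_entropy(logits, y)
            grad = grad + torch.autograd.grad(loss, x_adv)[0]
        grad = grad / len(angles)
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()
            # Clip to the eps-ball, then zero out noise outside the salient region.
            noise = torch.clamp(x_adv - x, -eps, eps) * mask
            x_adv = torch.clamp(x + noise, 0, 1).detach()
    return x_adv
```

With `mask = saliency_to_mask(saliency)`, pixels where the mask is zero remain exactly as in the clean image, which is what keeps plain backgrounds free of visible noise while the attack budget is spent on the salient object.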
Pages: 14