Adversarial Attacks for Image Segmentation on Multiple Lightweight Models

Cited by: 24
Authors
Kang, Xu [1 ]
Song, Bin [1 ]
Du, Xiaojiang [2 ]
Guizani, Mohsen [3 ]
Affiliations
[1] Xidian Univ, State Key Lab Integrated Serv Networks, Xian 710071, Peoples R China
[2] Temple Univ, Dept Comp & Informat Sci, Philadelphia, PA 19122 USA
[3] Qatar Univ, Dept Comp Sci & Engn, Doha 2713, Qatar
Funding
National Natural Science Foundation of China;
Keywords
Adversarial samples; image segmentation; joint learning; multi-model attack; perturbations;
DOI
10.1109/ACCESS.2020.2973069
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Owing to their powerful data-fitting ability, deep neural networks have been deployed across many critical application areas. In recent years, however, it was found that adversarial samples can easily fool deep neural networks: these inputs are crafted by adding a few small perturbations to an original sample, drastically changing the target model's decision while remaining imperceptible. Image segmentation is one of the most important technologies in medical imaging and autonomous driving. This paper explores the security of deep neural network models on image segmentation tasks. Two lightweight segmentation models for embedded devices are subjected to white-box attacks using local perturbations and universal perturbations. The perturbations are generated indirectly through a noise function and an intermediate variable, so that pixel gradients can propagate without restriction. Experiments show that different models have different blind spots, and that adversarial samples trained against a single model do not transfer. Finally, multiple models are attacked through our joint learning scheme; under a low-perturbation constraint, most of the pixels in the attacked region are misclassified by both lightweight models. The experimental results show that the proposed adversary degrades segmentation performance more than the FGSM.
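The abstract benchmarks the proposed attack against the FGSM baseline. As a minimal illustration of that baseline (not the paper's own perturbation scheme), the sketch below applies one FGSM step to a toy per-pixel linear segmenter in NumPy; the model, shapes, and `eps` value are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def ce_loss(x, w, y):
    # mean per-pixel cross-entropy of the toy linear segmenter
    p = softmax(x @ w)
    return -np.log(np.take_along_axis(p, y[..., None], axis=-1)).mean()

def fgsm_attack(x, w, y, eps):
    """One FGSM step: move each pixel by eps along the sign of the
    loss gradient, then clip back to the valid image range [0, 1].
    x: (H, W, C) image, w: (C, K) per-pixel weights, y: (H, W) labels."""
    p = softmax(x @ w)
    onehot = np.eye(w.shape[1])[y]
    dlogits = (p - onehot) / y.size        # d(mean CE)/d(logits)
    grad_x = dlogits @ w.T                 # backprop through the linear layer
    return np.clip(x + eps * np.sign(grad_x), 0.0, 1.0)

rng = np.random.default_rng(0)
x = rng.random((8, 8, 3))                  # toy 8x8 RGB "image"
w = rng.normal(size=(3, 4))                # 4 segmentation classes
y = (x @ w).argmax(axis=-1)                # labels the clean model predicts
x_adv = fgsm_attack(x, w, y, eps=0.1)
print(f"max |delta| = {np.abs(x_adv - x).max():.3f}")
print(f"loss clean = {ce_loss(x, w, y):.4f}, adv = {ce_loss(x_adv, w, y):.4f}")
```

Because the step is bounded by `eps` in the L-infinity norm, the perturbation stays small while the per-pixel loss strictly increases; the paper's adversary instead generates perturbations indirectly via a noise function and an intermediate variable.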
Pages: 31359-31370 (12 pages)