Adversarial Examples for CNN-Based SAR Image Classification: An Experience Study

Cited: 51
Authors
Li, Haifeng [1 ]
Huang, Haikuo [1 ]
Chen, Li [1 ]
Peng, Jian [1 ]
Huang, Haozhe [1 ]
Cui, Zhenqi [1 ]
Mei, Xiaoming [1 ]
Wu, Guohua [2 ]
Affiliations
[1] Central South University, School of Geosciences and Info-Physics, Changsha 410083, China
[2] Central South University, School of Traffic and Transportation Engineering, Changsha 410083, China
Funding
National Natural Science Foundation of China
Keywords
Adversarial example (AE); convolutional neural network (CNN); synthetic aperture radar (SAR); target recognition
DOI
10.1109/JSTARS.2020.3038683
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronic and Communication Technology]
Discipline Codes
0808; 0809
Abstract
Synthetic aperture radar (SAR) operates day and night and in all weather conditions, and it plays an extremely important role in the military field. Breakthroughs in deep learning methods, represented by convolutional neural network (CNN) models, have greatly improved SAR image recognition accuracy. CNN-based classification models achieve high-precision classification, but they are vulnerable to adversarial examples (AEs). To date, research on AEs has been mostly limited to natural images, while remote sensing images (SAR, multispectral, etc.) have not been extensively studied. To explore the basic characteristics of AEs of SAR images (ASIs), we use two classic white-box attack methods to generate ASIs from two SAR image classification datasets and then evaluate the vulnerability of six commonly used CNNs. The results show that ASIs are quite effective at fooling CNNs trained on SAR images, as indicated by the high attack success rates obtained. Owing to their structural differences, different CNNs exhibit different vulnerabilities to ASIs. We find that ASIs generated by nontargeted attack algorithms exhibit attack selectivity, which is related to the feature-space distribution of the original SAR images and the decision boundary of the classification model. We propose the sample-boundary-based AE selectivity distance, which successfully explains the attack selectivity of ASIs. We also analyze the effects of image parameters, such as image size and number of channels, on the attack success rate of ASIs through a parameter sensitivity analysis. The experimental results provide data support and an effective reference for studying attacks on, and the defense capabilities of, various CNNs with regard to AEs in SAR image classification models.
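This record does not name the two white-box attacks used in the paper, so as an illustration only, the sketch below implements FGSM, a classic white-box attack, together with the usual attack-success-rate metric (the fraction of originally correctly classified samples that the perturbation flips). It assumes a trained PyTorch CNN classifier `model` and SAR image tensors scaled to [0, 1]; the function names and `epsilon=0.02` are illustrative choices, not the paper's settings.

```python
# Minimal FGSM sketch for attacking a CNN-based SAR image classifier.
# Hypothetical names throughout; this is not the paper's exact method.
import torch
import torch.nn.functional as F

def fgsm_attack(model, images, labels, epsilon=0.02):
    """Generate untargeted FGSM adversarial examples.

    images: (N, C, H, W) tensor in [0, 1]; labels: (N,) int64 tensor.
    """
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    # Step in the direction that increases the loss, then clip to valid range.
    adv = images + epsilon * images.grad.sign()
    return adv.clamp(0.0, 1.0).detach()

@torch.no_grad()
def attack_success_rate(model, clean, adv, labels):
    """Fraction of originally correct samples that the attack flips."""
    clean_pred = model(clean).argmax(dim=1)
    adv_pred = model(adv).argmax(dim=1)
    correct = clean_pred == labels  # only count samples the model got right
    flipped = correct & (adv_pred != labels)
    return flipped.sum().item() / max(correct.sum().item(), 1)
```

With `model.eval()` set, `adv = fgsm_attack(model, x, y)` followed by `attack_success_rate(model, x, adv, y)` reproduces the kind of per-model vulnerability measurement the abstract describes; restricting the metric to originally correct samples keeps clean-accuracy differences between CNNs from inflating the reported success rate.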
Pages: 1333 - 1347 (15 pages)
Related Papers (55 records)
  • [1] Alzantot, Moustafa, Proc. 2018 Conf. on Empirical Methods in Natural Language Processing, 2018.
  • [2] Nguyen, A., Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2015, p. 427, DOI 10.1109/CVPR.2015.7298640.
  • [3] [Anonymous], 2019, arXiv:1904.08279.
  • [4] Arnab, Anurag; Miksik, Ondrej; Torr, Philip H. S., "On the Robustness of Semantic Segmentation Models to Adversarial Attacks," 2018 IEEE/CVF Conf. on Computer Vision and Pattern Recognition (CVPR), 2018, pp. 888-897.
  • [5] Boloor, Adith; He, Xin; Gill, Christopher; Vorobeychik, Yevgeniy; Zhang, Xuan, "Simple Physical Adversarial Examples against End-to-End Autonomous Driving Models," 2019 IEEE Int. Conf. on Embedded Software and Systems (ICESS), 2019.
  • [6] Bose, A. J., IEEE Int. Workshop on Multimedia Signal Processing, 2018.
  • [7] Bubeck, S., Proc. Machine Learning Research, vol. 97, 2019.
  • [8] Carlini, Nicholas; Wagner, David, "Towards Evaluating the Robustness of Neural Networks," 2017 IEEE Symposium on Security and Privacy (SP), 2017, pp. 39-57.
  • [9] Carlini, Nicholas; Wagner, David, "Audio Adversarial Examples: Targeted Attacks on Speech-to-Text," 2018 IEEE Security and Privacy Workshops (SPW), 2018, pp. 1-7.
  • [10] Cetin, A. E., Proc. IEEE Radar Conf., 2009, p. 1.