Adversarial robustness evaluation of multiple-source remote sensing image recognition based on deep neural networks

Cited by: 0
Authors
Sun H. [1]
Xu Y. [1]
Chen J. [2]
Lei L. [1]
Ji K. [1]
Kuang G. [1]
Affiliations
[1] College of Electronic Science, National University of Defense Technology, Changsha
[2] Beijing Institute of Remote Sensing Information, Beijing
Funding
National Natural Science Foundation of China
Keywords
adversarial attack; adversarial robustness evaluation; deep neural networks; feature visualization; multiple-source remote sensing images
DOI
10.11834/jrs.20210597
Abstract
Deep-neural-network-based multiple-source remote sensing image recognition systems have been widely used in many military scenarios, such as aerospace intelligence reconnaissance, autonomous environmental cognition for unmanned aerial vehicles, and multimode automatic target recognition systems. Deep learning models rely on the assumption that the training and testing data come from the same distribution, yet they perform poorly under common corruption or adversarial attacks. In the remote sensing community, the adversarial robustness of deep-neural-network-based recognition models has not received much attention, which increases the risk to many security-sensitive applications. This article evaluates the adversarial robustness of deep-neural-network-based recognition models for multiple-source remote sensing images. First, we discuss the incompleteness of deep learning theory and reveal the presence of serious security risks: the independent and identically distributed (i.i.d.) assumption is often violated, and system performance cannot be guaranteed in adversarial scenarios. The whole processing chain of a deep-neural-network-based image recognition system is then analyzed for vulnerabilities. Second, we introduce several representative algorithms for adversarial example generation under both white-box and black-box settings. A gradient-propagation-based visualization method is also proposed for analyzing adversarial attacks. We perform a detailed evaluation of nine deep neural networks on two publicly available remote sensing image datasets, using both optical and SAR images in our experiments. For each model, we generate seven types of adversarial perturbations, ranging from gradient-based optimization to unsupervised feature distortion, for each testing image. In all cases, we observe a significant reduction in average classification accuracy from the original clean images to their adversarial counterparts. Beyond average recognition accuracy under attack, feature attribution techniques are adopted to analyze the feature diffusion effect of adversarial attacks, contributing to a better understanding of the vulnerability of deep learning models. Experimental results demonstrate that every deep neural network suffers a large loss in classification accuracy when the testing images are adversarial examples. Studying such adversarial phenomena improves our understanding of the inner workings of deep learning models, and additional efforts are needed to enhance their adversarial robustness. © 2023 National Remote Sensing Bulletin
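The gradient-based perturbations mentioned in the abstract follow the pattern of the fast gradient sign method (FGSM) of Goodfellow et al. Below is a minimal PyTorch sketch of that pattern, assuming a classifier with inputs scaled to [0, 1]; the epsilon value, the loss choice, and the function name are illustrative assumptions, not the paper's exact configuration.

import torch
import torch.nn.functional as F

def fgsm_attack(model, images, labels, epsilon=0.03):
    # Illustrative FGSM sketch (not the paper's exact setup):
    # x_adv = x + epsilon * sign(grad_x loss(model(x), y))
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    # Step each pixel by epsilon in the direction that increases the loss.
    x_adv = images + epsilon * images.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()  # keep pixels in the valid range

A single epsilon-sized step along the sign of the input gradient is the simplest white-box attack; iterative variants such as PGD repeat this step under a norm constraint.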
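The evaluation protocol described in the abstract, comparing average classification accuracy on clean test images against accuracy on their adversarial counterparts, could be sketched as follows. The helper name and the attack argument are hypothetical, with fgsm_attack reused from the sketch above.

import torch

def evaluate(model, loader, attack=None):
    # Average top-1 accuracy, optionally on adversarial versions of each batch.
    model.eval()
    correct, total = 0, 0
    for images, labels in loader:
        if attack is not None:
            images = attack(model, images, labels)  # needs gradients, so outside no_grad
        with torch.no_grad():
            preds = model(images).argmax(dim=1)
        correct += (preds == labels).sum().item()
        total += labels.numel()
    return correct / total

# Illustrative usage: the gap between the two numbers is the robustness drop.
# clean_acc = evaluate(model, test_loader)
# adv_acc = evaluate(model, test_loader, attack=fgsm_attack)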
Pages: 1951-1963
Number of pages: 12