Adversarial robustness evaluation of multiple-source remote sensing image recognition based on deep neural networks

Cited by: 0
Authors
Sun H. [1]
Xu Y. [1]
Chen J. [2]
Lei L. [1]
Ji K. [1]
Kuang G. [1]
Affiliations
[1] College of Electronic Science, National University of Defense Technology, Changsha
[2] Beijing Institute of Remote Sensing Information, Beijing
Funding
National Natural Science Foundation of China
Keywords
adversarial attack; adversarial robustness evaluation; deep neural networks; feature visualization; multiple-source remote sensing images
DOI
10.11834/jrs.20210597
Abstract
Deep-neural-network-based multiple-source remote sensing image recognition systems have been widely used in many military scenarios, such as aerospace intelligence reconnaissance, autonomous environmental cognition for unmanned aerial vehicles, and multimode automatic target recognition. Deep learning models rely on the assumption that training and testing data are drawn from the same distribution; however, they perform poorly under common corruptions or adversarial attacks. In the remote sensing community, the adversarial robustness of deep-neural-network-based recognition models has not received much attention, which increases the risk to many security-sensitive applications. This article evaluates the adversarial robustness of deep-neural-network-based recognition models for multiple-source remote sensing images. First, we discuss the incompleteness of deep learning theory and reveal the resulting security risks: the independent and identically distributed assumption is often violated, and system performance cannot be guaranteed under adversarial scenarios. The whole processing chain of a deep-neural-network-based image recognition system is then analyzed for vulnerabilities. Second, we introduce several representative algorithms for adversarial example generation under both white-box and black-box settings, and propose a gradient-propagation-based visualization method for analyzing adversarial attacks. We perform a detailed evaluation of nine deep neural networks on two publicly available remote sensing image datasets; both optical and SAR remote sensing images are used in our experiments. For each model and each testing image, we generate seven types of perturbations, ranging from gradient-based optimization to unsupervised feature distortion. In all cases, we observe a significant reduction in average classification accuracy between the original clean data and their adversarial counterparts. Beyond adversarial average recognition accuracy, feature attribution techniques are adopted to analyze the feature diffusion effect of adversarial attacks, contributing to a better understanding of the vulnerability of deep learning models. Experimental results demonstrate that all evaluated deep neural networks suffer substantial losses in classification accuracy when the testing images are adversarial examples. Analyzing such adversarial phenomena deepens our understanding of the inner workings of deep learning models, and additional efforts are needed to enhance their adversarial robustness. © 2023 National Remote Sensing Bulletin
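As a minimal illustration of the evaluation protocol summarized above (a sketch, not the authors' exact implementation), the following PyTorch snippet generates FGSM adversarial examples, one representative white-box gradient-based attack, and compares clean versus adversarial classification accuracy. The model, data loader, and perturbation budget epsilon are hypothetical placeholders.

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, images, labels, epsilon=0.03):
        # One-step white-box attack: perturb each pixel along the sign of the
        # loss gradient with respect to the input (L_inf budget epsilon).
        images = images.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(images), labels)
        loss.backward()
        adv = images + epsilon * images.grad.sign()
        return adv.clamp(0.0, 1.0).detach()  # assumes inputs scaled to [0, 1]

    def accuracy(model, loader, attack=None):
        # Top-1 accuracy on clean data, or on adversarial data when an attack is given.
        model.eval()
        correct, total = 0, 0
        for images, labels in loader:
            if attack is not None:
                images = attack(model, images, labels)
            with torch.no_grad():
                preds = model(images).argmax(dim=1)
            correct += (preds == labels).sum().item()
            total += labels.numel()
        return correct / total

In such an evaluation, the gap between accuracy(model, loader) and accuracy(model, loader, attack=fgsm_attack) quantifies the per-model robustness loss; stronger optimization-based or feature-distortion attacks can be swapped in through the same attack interface.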
Pages: 1951-1963
Number of pages: 12