Design of robust hyperspectral image classifier based on adversarial training against adversarial attack

Cited by: 0
Authors
Park I. [1 ]
Kim S. [1 ]
Affiliations
[1] Department of Electronic Engineering, Yeungnam University
Keywords
Adversarial attack; Adversarial defense; Adversarial training; Deep learning; Hyperspectral image
DOI
10.5302/J.ICROS.2021.21.0012
Abstract
Recently, the importance of attacks and defenses based on adversarial examples has been highlighted in the defense field. In particular, a robust network design is necessary because a black-box attack can significantly degrade the classification performance of a network even when the attack introduces only a slight change to the image data. Furthermore, a network can be fooled by adversarial attacks even when its internal details are unknown. In the hyperspectral field, various classifiers have been designed using deep learning; however, these classifiers remain vulnerable to adversarial examples crafted through the backpropagation process. This paper proposes the design of a hyperspectral classifier that is robust against adversarial attacks. The robustness of the network is demonstrated by analyzing the results of classifying hyperspectral images that contain grass as well as objects painted in colors similar to grass, where the paint is assumed to represent the camouflage of a tank. The results demonstrate the significance of hyperspectral data for the defense field in the context of adversarial attacks, and a useful adversarial training scheme for hyperspectral classifiers is described. © ICROS 2021.
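To make the adversarial training scheme described above concrete, below is a minimal illustrative sketch, not the authors' implementation: a hypothetical 1-D spectral CNN is attacked with the one-step FGSM of Goodfellow et al. [4] and then trained on a mixture of clean and adversarial samples, in the spirit of Madry et al. [6]. The architecture, band count, perturbation budget `eps`, and the 0.5/0.5 loss weighting are all placeholder assumptions.

```python
# Illustrative sketch only; architecture and hyperparameters are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpectralClassifier(nn.Module):
    """Hypothetical 1-D CNN over the spectral axis of a hyperspectral pixel."""
    def __init__(self, num_bands=100, num_classes=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(32, num_classes)

    def forward(self, x):  # x: (batch, 1, num_bands)
        return self.head(self.features(x).flatten(1))

def fgsm_attack(model, x, y, eps=0.03):
    """One-step FGSM: perturb x along the sign of the loss gradient,
    i.e., the same backpropagation process that makes classifiers vulnerable."""
    x_adv = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x_adv), y).backward()
    # Clamping assumes spectra normalized to [0, 1].
    return (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()

def adversarial_training_step(model, optimizer, x, y, eps=0.03):
    """One min-max training step: craft adversarial examples, then
    minimize a mixed clean/adversarial loss (the 0.5 weights are a choice)."""
    model.eval()
    x_adv = fgsm_attack(model, x, y, eps)
    model.train()
    optimizer.zero_grad()  # clears gradients accumulated during the attack
    loss = 0.5 * F.cross_entropy(model(x), y) \
         + 0.5 * F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()

model = SpectralClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(8, 1, 100)              # dummy batch of spectral vectors
y = torch.randint(0, 5, (8,))
print(adversarial_training_step(model, optimizer, x, y))
```

In practice, stronger multi-step attacks such as PGD [6] are often substituted for FGSM during training at a higher computational cost; see [8] and [9] for efficiency-oriented variants.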
Pages: 389-400
Number of pages: 11
References
15 in total
[1]  
Hu W., Huang Y., Wei L., Zhang F., Li H., Deep convolutional neural networks for hyperspectral image classification, Journal of Sensors, vol. 2015, (2015)
[2]  
Hyperspectral CNN for Image Classification & Band Selection with Application to Face Recognition
[3]  
Hamida A.B., Benoit A., Lambert P., Amar C.B., 3-D deep learning approach for remote sensing image classification, IEEE Transactions on Geoscience and Remote Sensing, 56, 8, pp. 4420-4434, (2018)
[4]  
Goodfellow I.J., Shlens J., Szegedy C., Explaining and harnessing adversarial examples, 3rd International Conference on Learning Representations (ICLR 2015), (2015)
[5]  
Kurakin A., Goodfellow I.J., Bengio S., Adversarial examples in the physical world, 5th International Conference on Learning Representations (ICLR 2017), (2017)
[6]  
Madry A., Makelov A., Schmidt L., Tsipras D., Vladu A., Towards deep learning models resistant to adversarial attacks, 6th International Conference on Learning Representations (ICLR 2018), (2018)
[7]  
Carlini N., Wagner D., Towards evaluating the robustness of neural networks, 2017 IEEE Symposium on Security and Privacy (SP), (2017)
[8]  
Shafahi A., Najibi M., Ghiasi M.A., Xu Z., Dickerson J., Studer C., Davis L.S., Taylor G., Goldstein T., Adversarial training for free!, 33rd Conference on Neural Information Processing Systems (NeurIPS 2019), (2019)
[9]  
Zheng H., Zhang Z., Gu J., Lee H., Prakash A., Efficient Adversarial Training with Transferable Adversarial Examples, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1181-1190, (2020)
[10]  
Radford A., Metz L., Chintala S., Unsupervised representation learning with deep convolutional generative adversarial networks