Physically explainable CNN for SAR image classification

Cited by: 58
Authors
Huang, Zhongling [1 ]
Yao, Xiwen [1 ]
Liu, Ying [1 ]
Dumitru, Corneliu Octavian [2 ]
Datcu, Mihai [2 ,3 ]
Han, Junwei [1 ]
Affiliations
[1] Northwestern Polytech Univ, Sch Automat, Brain & Artificial Intelligence Lab (BRAIN LAB), Xi'an 710072, Peoples R China
[2] German Aerosp Ctr DLR, Remote Sensing Technol Inst IMF, D-82234 Wessling, Germany
[3] Univ Polytech Bucharest UPB, Bucharest 060042, Romania
Funding
National Natural Science Foundation of China; China Postdoctoral Science Foundation;
Keywords
Explainable deep learning; Physical model; SAR image classification; Prior knowledge; SHIP DETECTION; DECOMPOSITION; SCHEME;
DOI
10.1016/j.isprsjprs.2022.05.008
Chinese Library Classification (CLC)
P9 [Physical Geography];
Subject Classification Code
0705 ; 070501 ;
Abstract
Integrating the special electromagnetic characteristics of Synthetic Aperture Radar (SAR) into deep neural networks is essential for enhancing the explainability and physics awareness of deep learning. In this paper, we propose a novel physically explainable convolutional neural network for SAR image classification, namely physics guided and injected learning (PGIL). It comprises three parts: (1) explainable models (XM) that provide prior physics knowledge, (2) a physics guided network (PGN) that encodes the knowledge into physics-aware features, and (3) a physics injected network (PIN) that adaptively introduces the physics-aware features into the classification pipeline for label prediction. A hybrid Image-Physics SAR dataset format is proposed for evaluation, and experiments are conducted on both Sentinel-1 and Gaofen-3 SAR data. The results show that, with limited labeled data, the proposed PGIL substantially improves classification performance compared with the counterpart data-driven CNN and other pre-training methods. Additionally, the physics explanations are discussed to demonstrate the interpretability and the physical consistency preserved in the predictions. We believe the proposed method will promote the development of physically explainable deep learning for SAR image interpretation.
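The abstract describes a three-stage architecture (XM providing physics knowledge, PGN encoding it into physics-aware features, PIN injecting those features into the classification branch). Below is a minimal conceptual sketch in PyTorch of how such a physics-guided/physics-injected pipeline could be wired together; all module names, layer choices, tensor shapes, and the gated fusion step are illustrative assumptions for this sketch, not the authors' implementation.

# Minimal conceptual sketch of a PGN -> PIN pipeline as outlined in the abstract.
# The explainable models (XM) are assumed to produce multi-channel "physics maps"
# (e.g. a scattering or time-frequency decomposition); names and shapes are hypothetical.
import torch
import torch.nn as nn

class PhysicsGuidedNet(nn.Module):
    """Encodes prior physics knowledge into physics-aware feature maps."""
    def __init__(self, in_channels=4, feat_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, feat_dim, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat_dim, feat_dim, 3, padding=1), nn.ReLU(),
        )

    def forward(self, physics_maps):
        return self.encoder(physics_maps)

class PhysicsInjectedNet(nn.Module):
    """Fuses physics-aware features into the image branch before label prediction."""
    def __init__(self, feat_dim=64, num_classes=6):
        super().__init__()
        self.image_branch = nn.Sequential(
            nn.Conv2d(1, feat_dim, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat_dim, feat_dim, 3, padding=1), nn.ReLU(),
        )
        self.gate = nn.Conv2d(feat_dim, feat_dim, 1)  # learned injection weights
        self.head = nn.Linear(feat_dim, num_classes)

    def forward(self, sar_image, physics_features):
        img_feat = self.image_branch(sar_image)
        # Adaptive injection: gate the physics-aware features before adding them.
        fused = img_feat + torch.sigmoid(self.gate(physics_features)) * physics_features
        pooled = fused.mean(dim=(2, 3))  # global average pooling
        return self.head(pooled)

# Usage: physics_maps would come from the explainable models (XM).
pgn = PhysicsGuidedNet()
pin = PhysicsInjectedNet()
sar_image = torch.randn(2, 1, 128, 128)     # single-channel SAR patches
physics_maps = torch.randn(2, 4, 128, 128)  # hypothetical physics channels
logits = pin(sar_image, pgn(physics_maps))
print(logits.shape)  # torch.Size([2, 6])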
Pages: 25-37
Page count: 13
Related references
33 records in total
[21]   Ship Detection Based on Complex Signal Kurtosis in Single-Channel SAR Imagery [J].
Leng, Xiangguang ;
Ji, Kefeng ;
Zhou, Shilin ;
Xing, Xiangwei .
IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2019, 57 (09) :6447-6461
[22]   Self-Supervised Learning of Pretext-Invariant Representations [J].
Misra, Ishan ;
van der Maaten, Laurens .
2020 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2020, :6706-6716
[23]   Physics-induced graph neural network: An application to wind-farm power estimation [J].
Park, Junyoung ;
Park, Jinkyoo .
ENERGY, 2019, 187
[24]   Latent Dirichlet Allocation Models for Image Classification [J].
Rasiwasia, Nikhil ;
Vasconcelos, Nuno .
IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2013, 35 (11) :2665-2679
[25]   A Mutual Information-Based Self-Supervised Learning Model for PolSAR Land Cover Classification [J].
Ren, Bo ;
Zhao, Yangyang ;
Hou, Biao ;
Chanussot, Jocelyn ;
Jiao, Licheng .
IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2021, 59 (11) :9224-9237
[26]   An artificial target detection method combining a polarimetric feature extractor with deep convolutional neural networks [J].
Sun, Rui ;
Sun, Xiaobing ;
Chen, Feinan ;
Pan, Hao ;
Song, Qiang .
INTERNATIONAL JOURNAL OF REMOTE SENSING, 2020, 41 (13) :4995-5009
[27]   Discovery of Semantic Relationships in PolSAR Images Using Latent Dirichlet Allocation [J].
Tanase, Radu ;
Bahmanyar, Reza ;
Schwarz, Gottfried ;
Datcu, Mihai .
IEEE GEOSCIENCE AND REMOTE SENSING LETTERS, 2017, 14 (02) :237-241
[28]   Visualizing Data using t-SNE [J].
van der Maaten, Laurens ;
Hinton, Geoffrey .
JOURNAL OF MACHINE LEARNING RESEARCH, 2008, 9 :2579-2605
[29]   A joint change detection method on complex-valued polarimetric synthetic aperture radar images based on feature fusion and similarity learning [J].
Wang, Chenchen ;
Su, Weimin ;
Gu, Hong .
INTERNATIONAL JOURNAL OF REMOTE SENSING, 2021, 42 (13) :4868-4885
[30]   Rotation Awareness Based Self-Supervised Learning for SAR Target Recognition With Limited Training Samples [J].
Wen, Zaidao ;
Liu, Zhunga ;
Zhang, Shuai ;
Pan, Quan .
IEEE TRANSACTIONS ON IMAGE PROCESSING, 2021, 30 :7266-7279