Robust Pol-ISAR Target Recognition Based on ST-MC-DCNN

Cited by: 50
Authors
Bai, Xueru [1 ]
Zhou, Xuening [1 ]
Zhang, Feng [1 ]
Wang, Li [2 ]
Xue, Ruihang [1 ]
Zhou, Feng [2 ]
Affiliations
[1] Xidian Univ, Natl Lab Radar Signal Proc, Xian 710071, Peoples R China
[2] Xidian Univ, Key Lab Elect Informat Countermeasure & Simulat T, Minist Educ, Xian 710071, Peoples R China
Source
IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING | 2019, Vol. 57, No. 12
Funding
National Natural Science Foundation of China
Keywords
Feature extraction; Scattering; Strain; Target recognition; Shape; Azimuth; Image recognition; Automatic target recognition (ATR); deep convolutional neural network (DCNN); image deformation; inverse synthetic aperture radar (ISAR); RANGE SCALING METHOD; EFFICIENT CLASSIFICATION; NEURAL-NETWORKS; SCATTERING; IMAGES; DECOMPOSITION; RESOLUTION; MODELS; 3D;
DOI
10.1109/TGRS.2019.2930112
CLC Classification
P3 [Geophysics]; P59 [Geochemistry];
Subject Classification Code
0708 ; 070902 ;
Abstract
Although the deep convolutional neural network (DCNN) has been successfully applied to automatic target recognition (ATR) of ground vehicles based on synthetic aperture radar (SAR), most available techniques are not suitable for inverse synthetic aperture radar (ISAR) because they cannot handle the inherent unknown deformation (e.g., translation, scaling, and rotation) between the training and test samples. To achieve robust polarimetric-ISAR (Pol-ISAR) ATR, this paper proposes the spatial transformer-multi-channel-deep convolutional neural network (ST-MC-DCNN). In this structure, a double-layer spatial transformer network (STN) module adjusts the image deformation of each polarimetric channel, and the MC-DCNN then performs robust hierarchical feature extraction. Finally, the features are fused in a concatenation layer and the recognition result is output by a softmax classifier. The proposed network is end-to-end trainable and learns the optimal deformation parameters automatically from the training samples. On a fully Pol-ISAR image database generated from electromagnetic (EM) echoes of four satellites, the proposed structure achieves higher recognition accuracy than the traditional DCNN and MC-DCNN, and it shows robustness to image scaling, rotation, and combined deformation.
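To make the pipeline described in the abstract concrete, below is a minimal PyTorch sketch of an ST-MC-DCNN-style network: a double STN per polarimetric channel, per-channel DCNN feature extraction, feature concatenation, and a softmax output head. All layer sizes, channel counts, the 64x64 input size, and the four-class head are illustrative assumptions, not the configuration reported in the paper.

```python
# Hypothetical ST-MC-DCNN-style sketch; layer sizes are assumptions,
# not the paper's reported architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class STN(nn.Module):
    """Spatial transformer: regresses a 2x3 affine matrix and resamples
    the input, compensating translation, scaling, and rotation."""
    def __init__(self):
        super().__init__()
        self.loc = nn.Sequential(
            nn.Conv2d(1, 8, 7), nn.MaxPool2d(2), nn.ReLU(),
            nn.Conv2d(8, 10, 5), nn.MaxPool2d(2), nn.ReLU())
        self.fc = nn.Sequential(
            nn.Flatten(), nn.LazyLinear(32), nn.ReLU(), nn.Linear(32, 6))
        # Initialize the regression layer to the identity transform.
        self.fc[-1].weight.data.zero_()
        self.fc[-1].bias.data.copy_(
            torch.tensor([1, 0, 0, 0, 1, 0], dtype=torch.float))

    def forward(self, x):
        theta = self.fc(self.loc(x)).view(-1, 2, 3)
        grid = F.affine_grid(theta, x.size(), align_corners=False)
        return F.grid_sample(x, grid, align_corners=False)

class Branch(nn.Module):
    """One polarimetric channel: double-layer STN, then a small DCNN."""
    def __init__(self):
        super().__init__()
        self.stn1, self.stn2 = STN(), STN()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(), nn.LazyLinear(128), nn.ReLU())

    def forward(self, x):
        return self.cnn(self.stn2(self.stn1(x)))

class STMCDCNN(nn.Module):
    def __init__(self, n_channels=4, n_classes=4):
        super().__init__()
        self.branches = nn.ModuleList(Branch() for _ in range(n_channels))
        # Softmax is folded into the cross-entropy loss during training.
        self.head = nn.Linear(128 * n_channels, n_classes)

    def forward(self, x):  # x: (B, n_channels, H, W), one pol channel each
        feats = [b(x[:, i:i + 1]) for i, b in enumerate(self.branches)]
        return self.head(torch.cat(feats, dim=1))  # fused features -> logits

model = STMCDCNN()
logits = model(torch.randn(2, 4, 64, 64))  # e.g. HH/HV/VH/VV images
print(logits.shape)  # torch.Size([2, 4])
```

Because the STNs and the classifier sit in one computation graph, the deformation parameters are learned jointly with the recognition task, which is what makes such a structure end-to-end trainable as the abstract describes.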
Pages: 9912-9927
Page count: 16