Two-Level Feature-Fusion Ship Recognition Strategy Combining HOG Features with Dual-Polarized Data in SAR Images

Cited: 12
Authors
Xie, Hongtu [1 ]
He, Jinfeng [1 ]
Lu, Zheng [2 ]
Hu, Jun [1 ]
Affiliations
[1] Sun Yat Sen Univ, Sch Elect & Commun Engn, Shenzhen Campus, Shenzhen 518107, Peoples R China
[2] China Acad Space Technol, Inst Remote Sensing Satellite, Beijing 100094, Peoples R China
Keywords
synthetic aperture radar (SAR); two-level feature-fusion; SAR ship recognition; histogram of oriented gradients (HOG) features; dual-polarized SAR ship images;
DOI
10.3390/rs15184393
Chinese Library Classification (CLC)
X [Environmental Science, Safety Science];
Discipline Classification Code
08; 0830;
Abstract
Because of the inherent characteristics of synthetic aperture radar (SAR) imaging, ship features in SAR images are weak and the category distribution is imbalanced, which makes SAR ship recognition challenging. To address these problems, a two-level feature-fusion ship recognition strategy that combines histogram of oriented gradients (HOG) features with dual-polarized SAR data is proposed. The strategy jointly exploits the features extracted by the HOG operator and the shallow and deep features extracted by a Siamese network from the dual-polarized SAR ship images, which increases the amount of information available for model learning. First, the Siamese network extracts the shallow and deep features from the dual-polarized SAR images, and the HOG features of the dual-polarized SAR images are also extracted. Then, a bilinear transformation layer fuses the HOG features of the two polarization channels, while grouping bilinear pooling fuses the dual-polarized shallow features and the dual-polarized deep features, respectively. Finally, a concatenation operation combines the fused dual-polarized HOG, shallow, and deep features, and the combined representation is used to recognize the SAR ship targets. Experimental results on the OpenSARShip2.0 dataset demonstrate the correctness and effectiveness of the proposed strategy, which effectively improves ship recognition performance by fusing features of different levels from the dual-polarized SAR images.
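For illustration, the sketch below outlines the fusion pipeline described in the abstract in PyTorch. It is a minimal sketch under stated assumptions, not the authors' implementation: the network depth, channel sizes, HOG dimensionality, group count, and class names are hypothetical, and the HOG descriptors are assumed to be precomputed (e.g., with skimage.feature.hog) and passed in as vectors.

```python
# Minimal sketch of the two-level feature-fusion pipeline.
# All architecture sizes are hypothetical, not taken from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SiameseBackbone(nn.Module):
    """Weight-shared CNN applied separately to the VH and VV channels."""

    def __init__(self):
        super().__init__()
        self.shallow = nn.Sequential(                      # early layers -> shallow features
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.deep = nn.Sequential(                         # later layers -> deep features
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))

    def forward(self, x):
        s = self.shallow(x)                                # (N, 32, H/2, W/2)
        d = self.deep(s)                                   # (N, 64, H/4, W/4)
        return s.mean(dim=(2, 3)), d.mean(dim=(2, 3))      # global average pooling


def group_bilinear_pool(a, b, groups):
    """Group-wise bilinear pooling of two equal-length feature vectors."""
    n, c = a.shape
    a = a.view(n, groups, c // groups)
    b = b.view(n, groups, c // groups)
    fused = torch.einsum('ngi,ngj->ngij', a, b).flatten(1)      # per-group outer products
    fused = torch.sign(fused) * torch.sqrt(fused.abs() + 1e-8)  # signed square root
    return F.normalize(fused, dim=1)                            # L2 normalization


class TwoLevelFusionNet(nn.Module):
    """Fuses dual-polarized HOG, shallow, and deep features for ship recognition."""

    def __init__(self, hog_dim, num_classes, groups=8):
        super().__init__()
        self.groups = groups
        self.backbone = SiameseBackbone()
        self.hog_fusion = nn.Bilinear(hog_dim, hog_dim, 128)    # bilinear layer for dual-pol HOG
        fused_dim = 128 + groups * (32 // groups) ** 2 + groups * (64 // groups) ** 2
        self.classifier = nn.Linear(fused_dim, num_classes)

    def forward(self, x_vh, x_vv, hog_vh, hog_vv):
        s_vh, d_vh = self.backbone(x_vh)                        # shared weights for both
        s_vv, d_vv = self.backbone(x_vv)                        # polarization channels
        f_hog = self.hog_fusion(hog_vh, hog_vv)                 # fuse dual-pol HOG features
        f_shallow = group_bilinear_pool(s_vh, s_vv, self.groups)
        f_deep = group_bilinear_pool(d_vh, d_vv, self.groups)
        feats = torch.cat([f_hog, f_shallow, f_deep], dim=1)    # concatenate all fused features
        return self.classifier(feats)


if __name__ == "__main__":
    # Dummy forward pass with illustrative sizes (3 ship classes, 128x128 chips,
    # 1764-dim HOG vectors chosen arbitrarily for the demo).
    net = TwoLevelFusionNet(hog_dim=1764, num_classes=3)
    x_vh, x_vv = torch.randn(2, 1, 128, 128), torch.randn(2, 1, 128, 128)
    hog_vh, hog_vv = torch.randn(2, 1764), torch.randn(2, 1764)
    print(net(x_vh, x_vv, hog_vh, hog_vv).shape)                # torch.Size([2, 3])
```

The two fusion levels correspond to the abstract's description: bilinear fusion of handcrafted (HOG) features across polarizations, and grouping bilinear pooling of learned shallow and deep features across polarizations, followed by concatenation before classification.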
Pages: 13