Component recognition of ISAR targets via multimodal feature fusion

Times Cited: 0
Authors
Li, Chenxuan [1]
Zhu, Weigang [2]
Qu, Wei [2]
Ma, Fanyin [1]
Wang, Rundong [1]
Affiliations
[1] Space Engineering University, Graduate School, Beijing 101400, People's Republic of China
[2] Space Engineering University, Department of Electronic and Optical Engineering, Beijing 101400, People's Republic of China
Keywords
Few-shot; Semantic segmentation; Inverse Synthetic Aperture Radar (ISAR); Scattering; Multimodal fusion; Network
DOI
10.1016/j.cja.2024.06.031
Chinese Library Classification (CLC)
V [Aeronautics, Astronautics];
Subject Classification Code
08; 0825
Abstract
Inverse Synthetic Aperture Radar (ISAR) images of complex targets have a low Signal-to-Noise Ratio (SNR), fuzzy edges, and large differences in scattering intensity, which limit the recognition performance of ISAR systems. Data scarcity poses a further challenge to accurate component recognition. To address component recognition in complex ISAR targets, this paper adopts semantic segmentation and proposes a few-shot semantic segmentation framework that fuses multimodal features. The scarcity of available data is mitigated by a two-branch scattering feature encoding structure. High-resolution features are then obtained by fusing ISAR image texture features with the scattering quantization information of complex-valued echoes, yielding significantly higher structural adaptability. Meanwhile, a scattering trait enhancement module and a statistical quantification module are designed. The edge texture is enhanced based on the scattering quantization property, which alleviates the difficulty of segmenting blurred edges under low-SNR conditions. The coupling of query/support samples is strengthened through four-dimensional convolution. Additionally, to overcome fusion challenges caused by information differences between modalities, multimodal feature fusion is guided by an equilibrium comprehension loss. In this way, the performance potential of the fusion framework is fully unleashed and the decision risk is effectively reduced. Experiments demonstrate the advantages of the proposed framework in multimodal feature fusion, and it retains strong component segmentation capability under low-SNR and edge-blurring conditions. (c) 2024 Production and hosting by Elsevier Ltd. on behalf of Chinese Society of Aeronautics and Astronautics. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
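As an illustration of the two-branch multimodal encoding described in the abstract, the sketch below is a minimal, hypothetical PyTorch example (not the authors' published code): it fuses texture features from an ISAR magnitude image with scattering features derived from the real/imaginary parts of a complex-valued echo. All module names, channel sizes, and the simple concatenation-based fusion are assumptions made for illustration only.

```python
# Hypothetical sketch (not the paper's architecture): a two-branch encoder that
# fuses ISAR image texture features with features computed from the
# complex-valued echo, in the spirit of the multimodal fusion the abstract
# describes. Names and dimensions are illustrative assumptions.
import torch
import torch.nn as nn


class TwoBranchFusionEncoder(nn.Module):
    def __init__(self, img_channels=1, echo_channels=2, feat_dim=64):
        super().__init__()
        # Branch 1: texture features from the (magnitude) ISAR image.
        self.texture_branch = nn.Sequential(
            nn.Conv2d(img_channels, feat_dim, 3, padding=1),
            nn.BatchNorm2d(feat_dim),
            nn.ReLU(inplace=True),
        )
        # Branch 2: scattering features from the complex echo, represented
        # here as two real channels (real and imaginary parts).
        self.scatter_branch = nn.Sequential(
            nn.Conv2d(echo_channels, feat_dim, 3, padding=1),
            nn.BatchNorm2d(feat_dim),
            nn.ReLU(inplace=True),
        )
        # Simple channel-concatenation fusion followed by a 1x1 projection.
        self.fuse = nn.Conv2d(2 * feat_dim, feat_dim, 1)

    def forward(self, isar_image, complex_echo):
        tex = self.texture_branch(isar_image)
        # Stack real and imaginary parts of the echo as two input channels.
        echo = torch.cat([complex_echo.real, complex_echo.imag], dim=1)
        sca = self.scatter_branch(echo)
        return self.fuse(torch.cat([tex, sca], dim=1))


if __name__ == "__main__":
    img = torch.rand(1, 1, 128, 128)                        # ISAR magnitude image
    echo = torch.rand(1, 1, 128, 128, dtype=torch.cfloat)   # complex-valued echo
    feats = TwoBranchFusionEncoder()(img, echo)
    print(feats.shape)  # torch.Size([1, 64, 128, 128])
```

A real implementation following the abstract would replace the plain concatenation with the scattering trait enhancement and statistical quantification modules and guide the fusion with the equilibrium comprehension loss; this sketch only shows the two-branch encoding layout.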
Pages: 18
Related Papers (50 records in total)
  • [21] Realistic Human Action Recognition With Multimodal Feature Selection and Fusion. Wu, Qiuxia; Wang, Zhiyong; Deng, Feiqi; Chi, Zheru; Feng, David Dagan. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 2013, 43(4): 875-885.
  • [22] Mobile Signal Modulation Recognition Based on Multimodal Feature Fusion. Cai, Zhuoran; Li, Yuqian; Wu, Qidi. Mobile Networks and Applications, 2022, 27: 2469-2482.
  • [23] New Method Using Feature Level Image Fusion and Entropy Component Analysis for Multimodal Human Face Recognition. Wu, Tao; Wu, Xiao-Jun; Liu, Xing; Luo, Xiao-Qing. 2012 International Workshop on Information and Electronics Engineering, 2012, 29: 3991-3995.
  • [24] Alzheimer's disease diagnosis via multimodal feature fusion. Tu, Yue; Lin, Shukuan; Qiao, Jianzhong; Zhuang, Yilin; Zhang, Peng. Computers in Biology and Medicine, 2022, 148.
  • [25] Sentiment Analysis of Social Media via Multimodal Feature Fusion. Zhang, Kang; Geng, Yushui; Zhao, Jing; Liu, Jianxin; Li, Wenxiao. Symmetry-Basel, 2020, 12(12): 1-14.
  • [26] Research on Kernel-Based Feature Fusion Algorithm in Multimodal Recognition. Xu, Xiaona; Pan, Xiuqin; Zhao, Yue; Pu, Qiumei. ITCS: 2009 International Conference on Information Technology and Computer Science, Proceedings, Vol 2, 2009: 3-6.
  • [27] Multimodal Finger Feature Fusion and Recognition Based on Delaunay Triangular Granulation. Peng, Jinjin; Li, Yanan; Li, Ruimei; Jia, Guimin; Yang, Jinfeng. Pattern Recognition (CCPR 2014), Part II, 2014, 484: 303-310.
  • [28] Multimodal Feature Fusion for 3D Shape Recognition and Retrieval. Bu, Shuhui; Cheng, Shaoguang; Liu, Zhenbao; Han, Junwei. IEEE MultiMedia, 2014, 21(4): 38-46.
  • [29] Multimodal Feature Fusion Model for RGB-D Action Recognition. Xu, Weiyao; Wu, Muqing; Zhao, Min; Xia, Ting. 2021 IEEE International Conference on Multimedia & Expo Workshops (ICMEW), 2021.
  • [30] Multimodal face recognition based on images fusion on feature and decision levels. Liu, Jin; Zhang, Le-Shi; Xu, Ke-Xin. Nami Jishu yu Jingmi Gongcheng / Nanotechnology and Precision Engineering, 2009, 7(1): 65-70.