Component recognition of ISAR targets via multimodal feature fusion

Cited by: 0
Authors
Li, Chenxuan [1 ]
Zhu, Weigang [2 ]
Qu, Wei [2 ]
Ma, Fanyin [1 ]
Wang, Rundong [1 ]
Affiliations
[1] Space Engn Univ, Grad Sch, Beijing 101400, Peoples R China
[2] Space Engn Univ, Dept Elect & Opt Engn, Beijing 101400, Peoples R China
Keywords
Few-shot; Semantic segmentation; Inverse Synthetic Aperture Radar (ISAR); Scattering; Multimodal fusion; NETWORK;
DOI
10.1016/j.cja.2024.06.031
Chinese Library Classification
V [Aeronautics, Astronautics];
Discipline Classification Code
08; 0825;
Abstract
Inverse Synthetic Aperture Radar (ISAR) images of complex targets have a low Signal-to-Noise Ratio (SNR), fuzzy edges, and large differences in scattering intensity, which limit the recognition performance of ISAR systems. Data scarcity poses a further challenge to accurate component recognition. To address component recognition in complex ISAR targets, this paper adopts semantic segmentation and proposes a few-shot semantic segmentation framework that fuses multimodal features. The scarcity of available data is mitigated by a two-branch scattering feature encoding structure. High-resolution features are then obtained by fusing ISAR image texture features with the scattering quantization information of complex-valued echoes, yielding significantly higher structural adaptability. Meanwhile, a scattering trait enhancement module and a statistical quantification module are designed: the edge texture is enhanced based on the scattering quantization property, which alleviates the difficulty of segmenting blurred edges under low-SNR conditions. The coupling of query/support samples is strengthened through four-dimensional convolution. Additionally, to overcome fusion challenges caused by information differences between modalities, multimodal feature fusion is guided by an equilibrium comprehension loss. In this way, the performance potential of the fusion framework is fully unleashed and the decision risk is effectively reduced. Experiments demonstrate the advantages of the proposed framework in multimodal feature fusion, and it retains strong component segmentation capability under low-SNR, edge-blurring conditions. (c) 2024 Production and hosting by Elsevier Ltd. on behalf of Chinese Society of Aeronautics and Astronautics. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
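To illustrate the query/support coupling step mentioned in the abstract, the following is a minimal NumPy sketch (not the authors' implementation) of the 4D correlation tensor that four-dimensional convolutions in few-shot segmentation typically operate on: the cosine similarity between every query spatial position and every support spatial position. The function name and shapes are illustrative assumptions.

```python
import numpy as np

def correlation_4d(query_feat, support_feat, eps=1e-8):
    """Build a 4D query/support correlation tensor: cosine similarity
    between every query position and every support position.

    query_feat:   (C, Hq, Wq) feature map of the query image
    support_feat: (C, Hs, Ws) feature map of the support image
    returns:      (Hq, Wq, Hs, Ws) correlation tensor
    """
    C, Hq, Wq = query_feat.shape
    _, Hs, Ws = support_feat.shape
    q = query_feat.reshape(C, -1)    # (C, Hq*Wq), one column per position
    s = support_feat.reshape(C, -1)  # (C, Hs*Ws)
    # L2-normalize each spatial position's channel vector
    q = q / (np.linalg.norm(q, axis=0, keepdims=True) + eps)
    s = s / (np.linalg.norm(s, axis=0, keepdims=True) + eps)
    corr = q.T @ s                   # (Hq*Wq, Hs*Ws) cosine similarities
    corr = np.clip(corr, 0, None)    # keep only positive correlations
    return corr.reshape(Hq, Wq, Hs, Ws)
```

A 4D convolution would then filter this tensor jointly over the query and support spatial dimensions to strengthen mutually consistent matches.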
Pages: 18