Component recognition of ISAR targets via multimodal feature fusion

Times cited: 0
Authors
Li, Chenxuan [1 ]
Zhu, Weigang [2 ]
Qu, Wei [2 ]
Ma, Fanyin [1 ]
Wang, Rundong [1 ]
Affiliations
[1] Space Engn Univ, Grad Sch, Beijing 101400, Peoples R China
[2] Space Engn Univ, Dept Elect & Opt Engn, Beijing 101400, Peoples R China
Keywords
Few-shot; Semantic segmentation; Inverse Synthetic Aperture Radar (ISAR); Scattering; Multimodal fusion; Network
DOI
10.1016/j.cja.2024.06.031
CLC number
V [Aeronautics, Astronautics]
Discipline codes
08; 0825
Abstract
Inverse Synthetic Aperture Radar (ISAR) images of complex targets have a low Signal-to-Noise Ratio (SNR), fuzzy edges, and large differences in scattering intensity, which limit the recognition performance of ISAR systems. Data scarcity poses a further challenge to accurate component recognition. To address component recognition in complex ISAR targets, this paper adopts semantic segmentation and proposes a few-shot semantic segmentation framework that fuses multimodal features. The scarcity of available data is mitigated by a two-branch scattering feature encoding structure. High-resolution features are then obtained by fusing ISAR image texture features with the scattering quantization information of complex-valued echoes, yielding significantly higher structural adaptability. Meanwhile, a scattering trait enhancement module and a statistical quantification module are designed. The edge texture is enhanced based on the scattering quantization property, which alleviates the difficulty of segmenting blurred edges under low-SNR conditions. The coupling of query/support samples is strengthened through four-dimensional convolution. Additionally, to overcome fusion challenges caused by information differences between modalities, multimodal feature fusion is guided by an equilibrium comprehension loss. In this way, the performance potential of the fusion framework is fully unleashed and the decision risk is effectively reduced. Experiments demonstrate the clear advantages of the proposed framework in multimodal feature fusion, and it retains strong component segmentation capability under low-SNR/edge-blurring conditions. (c) 2024 Production and hosting by Elsevier Ltd. on behalf of Chinese Society of Aeronautics and Astronautics. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
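The abstract's "coupling of query/support samples through four-dimensional convolution" operates on a dense 4D correlation tensor between query and support feature maps, a construction common in few-shot segmentation. As a minimal illustrative sketch only (the cosine-similarity correlation and the function name `correlation_4d` are assumptions, not details from the paper):

```python
import numpy as np

def correlation_4d(feat_q, feat_s, eps=1e-8):
    """Build a 4D correlation tensor between query and support feature maps.

    feat_q, feat_s: arrays of shape (C, H, W).
    Returns an (H, W, H, W) tensor of cosine similarities, the kind of
    dense query/support coupling that 4D convolutions then refine.
    """
    c, h, w = feat_q.shape
    q = feat_q.reshape(c, -1)  # (C, H*W) query descriptors per location
    s = feat_s.reshape(c, -1)  # (C, H*W) support descriptors per location
    # L2-normalize each spatial location's channel vector
    q = q / (np.linalg.norm(q, axis=0, keepdims=True) + eps)
    s = s / (np.linalg.norm(s, axis=0, keepdims=True) + eps)
    corr = q.T @ s             # (H*W, H*W) pairwise cosine similarities
    return corr.reshape(h, w, h, w)

rng = np.random.default_rng(0)
fq = rng.standard_normal((8, 4, 4)).astype(np.float32)
corr = correlation_4d(fq, fq)
print(corr.shape)                          # (4, 4, 4, 4)
print(round(float(corr[1, 2, 1, 2]), 3))   # self-similarity -> 1.0
```

A 4D convolution then filters this tensor jointly over the query and support spatial dimensions, so matches are smoothed consistently in both views rather than per-image.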
Pages: 18
Related papers (50 total)
  • [41] EmbraceNet for Activity: A Deep Multimodal Fusion Architecture for Activity Recognition
    Choi, Jun-Ho
    Lee, Jong-Seok
    [J]. UBICOMP/ISWC'19 ADJUNCT: PROCEEDINGS OF THE 2019 ACM INTERNATIONAL JOINT CONFERENCE ON PERVASIVE AND UBIQUITOUS COMPUTING AND PROCEEDINGS OF THE 2019 ACM INTERNATIONAL SYMPOSIUM ON WEARABLE COMPUTERS, 2019, : 693 - 698
  • [42] Infrared and Visible Image Fusion via General Feature Embedding From CLIP and DINOv2
    Luo, Yichuang
    Wang, Fang
    Liu, Xiaohu
    [J]. IEEE ACCESS, 2024, 12 : 99362 - 99371
  • [43] Image-based Plant Recognition by Fusion of Multimodal Information
    Liu, Jen-Chang
    Chiang, Chieh-Yu
    Chen, Sam
    [J]. 2016 10TH INTERNATIONAL CONFERENCE ON INNOVATIVE MOBILE AND INTERNET SERVICES IN UBIQUITOUS COMPUTING (IMIS), 2016, : 5 - 11
  • [44] Confidence-based Deep Multimodal Fusion for Activity Recognition
    Choi, Jun-Ho
    Lee, Jong-Seok
    [J]. PROCEEDINGS OF THE 2018 ACM INTERNATIONAL JOINT CONFERENCE ON PERVASIVE AND UBIQUITOUS COMPUTING AND PROCEEDINGS OF THE 2018 ACM INTERNATIONAL SYMPOSIUM ON WEARABLE COMPUTERS (UBICOMP/ISWC'18 ADJUNCT), 2018, : 1548 - 1556
  • [45] The sentiment recognition of online users based on DNNs multimodal fusion
    Fan T.
    Wu P.
    Zhu P.
    [J]. Proceedings of the Association for Information Science and Technology, 2019, 56 (01) : 89 - 97
  • [46] RPFNET: COMPLEMENTARY FEATURE FUSION FOR HAND GESTURE RECOGNITION
    Kim, Do Yeon
    Kim, Dae Ha
    Song, Byung Cheol
    [J]. 2022 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, ICIP, 2022, : 986 - 990
  • [47] MULTIMODAL FUSION VIA A SERIES OF TRANSFERS FOR NOISE REMOVAL
    Son, Chang-Hwan
    Zhang, Xiao-Ping
    [J]. 2017 24TH IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2017, : 530 - 534
  • [48] Multisource Multimodal Feature Fusion for Small Leak Detection in Gas Pipelines
    Yan, Wendi
    Liu, Wei
    Zhang, Qiao
    Bi, Hongbo
    Jiang, Chunlei
    Liu, Haixu
    Wang, Tao
    Dong, Taiji
    Ye, Xiaohui
    [J]. IEEE SENSORS JOURNAL, 2024, 24 (02) : 1857 - 1865
  • [49] A Geometric Framework for Feature Mappings in Multimodal Fusion of Brain Image Data
    Zhang, Wen
    Mi, Liang
    Thompson, Paul M.
    Wang, Yalin
    [J]. INFORMATION PROCESSING IN MEDICAL IMAGING, IPMI 2019, 2019, 11492 : 617 - 630
  • [50] TFEFusion: Targeted feature extraction model for multimodal medical image fusion
    Fan, Chao
    Xuan, Zhihui
    Peng, Bincheng
    Zhu, Zhentong
    Zhu, Xinru
    [J]. BIOMEDICAL SIGNAL PROCESSING AND CONTROL, 2025, 106