Fine-Grained Image Generation Network With Radar Range Profiles Using Cross-Modal Visual Supervision

Cited by: 3
Authors
Bao, Jiacheng [1 ]
Li, Da [1 ]
Li, Shiyong [1 ]
Zhao, Guoqiang [1 ]
Sun, Houjun [1 ]
Zhang, Yi [1 ]
Affiliations
[1] Beijing Inst Technol, Sch Integrated Circuits & Elect, Beijing Key Lab Millimeter Wave & Terahertz Tech, Beijing 100081, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Cross-modal supervision; deep neural network (DNN); electromagnetic imaging; generative adversarial network (GAN); radar range profile; CONVOLUTIONAL NEURAL-NETWORK; ENTROPY; RECONSTRUCTION; RESOLUTION;
DOI
10.1109/TMTT.2023.3299615
Chinese Library Classification (CLC)
TM [Electrotechnics]; TN [Electronics and Communication Technology]
Subject Classification Codes
0808; 0809
Abstract
Electromagnetic imaging methods mainly utilize converted sampling, dimensional transformation, and coherent processing to obtain spatial images of targets, and they often suffer from accuracy and efficiency problems. Deep neural network (DNN)-based high-resolution imaging methods have achieved impressive results in improving resolution and reducing computational costs. However, previous works exploit only single-modality information from electromagnetic data; thus, their performance is limited. In this article, we propose an electromagnetic image generation network (EMIG-Net), which translates electromagnetic data of multiview 1-D range profiles (1DRPs) directly into bird-view 2-D high-resolution images under cross-modal supervision. We construct an adversarial generative framework with visual images as supervision to significantly improve the imaging accuracy. Moreover, the network structure is carefully designed to optimize computational efficiency. Experiments on self-built synthetic data and experimental data measured in an anechoic chamber show that our network can generate high-resolution images whose visual quality is superior to that of both traditional imaging methods and DNN-based methods, while consuming less computational cost. Compared with the backprojection (BP) algorithm, EMIG-Net gains a significant improvement in entropy (72%), peak signal-to-noise ratio (PSNR; 150%), and structural similarity (SSIM; 153%). Our work shows the broad prospects of deep learning in radar data representation and high-resolution imaging and provides a path toward electromagnetic imaging based on learning theory.
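As a rough illustration of the evaluation criteria named in the abstract, the Python sketch below computes image entropy, PSNR, and SSIM for a reconstructed 2-D radar image against a reference (e.g., an EMIG-Net output versus a BP reconstruction). This is not the authors' code: the [0, 1] normalization, histogram binning, and the helper names image_entropy and compare_images are assumptions made purely for illustration.

import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def image_entropy(img, bins=256):
    # Shannon entropy of the gray-level histogram; lower entropy usually
    # indicates a sharper, less cluttered radar image.
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = hist.astype(float) / max(hist.sum(), 1)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def compare_images(generated, reference):
    # Normalize both images to [0, 1] and return (entropy, PSNR, SSIM).
    gen = (generated - generated.min()) / (np.ptp(generated) + 1e-12)
    ref = (reference - reference.min()) / (np.ptp(reference) + 1e-12)
    return (
        image_entropy(gen),
        peak_signal_noise_ratio(ref, gen, data_range=1.0),
        structural_similarity(ref, gen, data_range=1.0),
    )

# Toy usage: random 2-D arrays stand in for bird-view radar images.
rng = np.random.default_rng(0)
reference = rng.random((128, 128))
generated = np.clip(reference + 0.05 * rng.standard_normal((128, 128)), 0.0, 1.0)
print(compare_images(generated, reference))

Under these assumptions, the percentage improvements quoted in the abstract would correspond to relative changes of these three scores with respect to the BP baseline.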
Pages: 1339-1352
Page count: 14
Related Papers
50 records in total
  • [41] Fine-Grained Gesture Recognition Based on High Resolution Range Profiles of Terahertz Radar. Wang, Liying; Cui, Zongyong; Cao, Zongjie; Xu, Shengping; Min, Rui. 2019 IEEE International Geoscience and Remote Sensing Symposium (IGARSS 2019), 2019: 1470-1473.
  • [42] Deep Self-Supervised Hashing With Fine-Grained Similarity Mining for Cross-Modal Retrieval. Han, Lijun; Wang, Renlin; Chen, Chunlei; Zhang, Huihui; Zhang, Yujie; Zhang, Wenfeng. IEEE Access, 2024, 12: 31756-31770.
  • [43] Cross-modal recipe retrieval based on unified text encoder with fine-grained contrastive learning. Zhang, Bolin; Kyutoku, Haruya; Doman, Keisuke; Komamizu, Takahiro; Ide, Ichiro; Qian, Jiangbo. Knowledge-Based Systems, 2024, 305.
  • [44] Fine-Grained Image Classification Based on Cross-Attention Network. Zheng, Zhiwen; Zhou, Juxiang; Gan, Jianhou; Luo, Sen; Gao, Wei. International Journal on Semantic Web and Information Systems, 2022, 18 (01).
  • [45] Transformer-based statement level vulnerability detection by cross-modal fine-grained features capture. Tao, Wenxin; Su, Xiaohong; Ke, Yekun; Han, Yi; Zheng, Yu; Wei, Hongwei. Knowledge-Based Systems, 2025, 316.
  • [46] DCMA-Net: dual cross-modal attention for fine-grained few-shot recognition. Zhou, Yan; Ren, Xiao; Li, Jianxun; Yang, Yin; Zhou, Haibin. Multimedia Tools and Applications, 2024, 83 (05): 14521-14537.
  • [47] Context-Aware Visual Policy Network for Fine-Grained Image Captioning. Zha, Zheng-Jun; Liu, Daqing; Zhang, Hanwang; Zhang, Yongdong; Wu, Feng. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022, 44 (02): 710-722.
  • [48] Fine-Grained Correlation Learning with Stacked Co-attention Networks for Cross-Modal Information Retrieval. Lu, Yuhang; Yu, Jing; Liu, Yanbing; Tan, Jianlong; Guo, Li; Zhang, Weifeng. Knowledge Science, Engineering and Management (KSEM 2018), Pt I, 2018, 11061: 213-225.
  • [49] DCMA-Net: dual cross-modal attention for fine-grained few-shot recognition. Zhou, Yan; Ren, Xiao; Li, Jianxun; Yang, Yin; Zhou, Haibin. Multimedia Tools and Applications, 2024, 83: 14521-14537.
  • [50] A Fine-Grained Semantic Alignment Method Specific to Aggregate Multi-Scale Information for Cross-Modal Remote Sensing Image Retrieval. Zheng, Fuzhong; Wang, Xu; Wang, Luyao; Zhang, Xiong; Zhu, Hongze; Wang, Long; Zhang, Haisu. Sensors, 2023, 23 (20).