Meta-Learning Based Domain Prior With Application to Optical-ISAR Image Translation

Times Cited: 3
Authors
Liao, Huaizhang [1]
Xia, Jingyuan [1]
Yang, Zhixiong [1]
Pan, Fulin [1]
Liu, Zhen [1]
Liu, Yongxiang [1]
Affiliations
[1] Natl Univ Def Technol, Coll Elect Engn, Changsha 410073, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Training; Optical imaging; Optical scattering; Metalearning; Feature extraction; Task analysis; Scattering; Image translation; meta-learning; generative model; ISAR image processing;
DOI
10.1109/TCSVT.2023.3318401
Chinese Library Classification (CLC)
TM [Electrical Technology]; TN [Electronic Technology, Communication Technology];
Discipline Classification Codes
0808; 0809;
Abstract
This paper focuses on generating Inverse Synthetic Aperture Radar (ISAR) images from optical images, in particular for orbital space targets. ISAR images are widely used in space target observation and classification tasks; however, because collecting ISAR samples is expensive, training deep learning-based ISAR image classifiers from insufficient samples and generating ISAR samples from emulated optical images via image translation techniques have attracted increasing attention. Image translation has achieved significant success and popularity in the computer vision, remote sensing, and data generation communities. However, most existing methods are built on extracting explicit pixel-level features and do not perform effectively when the translation targets domains characterized by specific implicit features, as the ISAR domain is. We propose a meta-learning based domain prior for implicit feature modelling and apply it to the CycleGAN and UNIT models to realize effective translation between the ISAR and optical domains. Two representative implicit features, the ISAR scattering distribution feature from the physical domain and the classification-identity feature from the task domain, are carefully formulated with explicit statistical models. A meta-learning based training scheme is introduced to leverage the mutual knowledge of domain priors across different samples, thereby enabling few-shot learning with dramatically reduced training samples. Extensive simulations validate that the obtained ISAR images have better visual authenticity and training effectiveness than existing image translation approaches on various synthetic datasets. Source codes are available at https://github.com/XYLGroup/MLDP.
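The central mechanism described in the abstract is a meta-learning scheme that shares domain-prior knowledge across samples so the optical-to-ISAR translator can be adapted from very few pairs. As a rough illustration only (not the authors' MLDP implementation, which is released at the GitHub link above), the sketch below shows a MAML-style inner/outer loop in PyTorch 2.x where a toy generator is adapted per task under a learned domain-prior penalty; the module names (TinyGenerator, DomainPrior), the penalty form, and the synthetic data are assumptions made for this example.

import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.func import functional_call  # requires PyTorch >= 2.0

class TinyGenerator(nn.Module):
    # Toy stand-in for an optical-to-ISAR generator (e.g. one CycleGAN branch).
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 1, 3, padding=1))
    def forward(self, x):
        return self.net(x)

class DomainPrior(nn.Module):
    # Toy learned prior: a scalar score of how "ISAR-like" an image looks.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1))
    def forward(self, x):
        return self.net(x)

gen, prior = TinyGenerator(), DomainPrior()
meta_opt = torch.optim.Adam(list(gen.parameters()) + list(prior.parameters()), lr=1e-3)
inner_lr = 1e-2

# Synthetic few-shot "tasks": each task holds two optical/ISAR image pairs.
tasks = [(torch.rand(2, 1, 32, 32), torch.rand(2, 1, 32, 32)) for _ in range(4)]

for step in range(3):
    meta_loss = 0.0
    for optical, isar in tasks:
        # Inner step: adapt the generator to this task under the prior penalty.
        fake = gen(optical)
        inner = F.l1_loss(fake, isar) + 0.1 * F.softplus(-prior(fake)).mean()
        grads = torch.autograd.grad(inner, list(gen.parameters()), create_graph=True)
        adapted = {name: p - inner_lr * g
                   for (name, p), g in zip(gen.named_parameters(), grads)}
        # Outer step: evaluate the adapted generator; gradients also reach the prior.
        meta_loss = meta_loss + F.l1_loss(functional_call(gen, adapted, (optical,)), isar)
    meta_opt.zero_grad()
    meta_loss.backward()
    meta_opt.step()

The outer loss is differentiated through the inner update (create_graph=True), so the prior network is trained on what makes adaptation succeed across tasks rather than on any single sample, which is the few-shot intuition the abstract points to.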
Pages: 7041-7056
Page count: 16