PeaceGAN: A GAN-Based Multi-Task Learning Method for SAR Target Image Generation with a Pose Estimator and an Auxiliary Classifier

Cited by: 17
Authors
Oh, Jihyong [1]
Kim, Munchurl [1]
Affiliations
[1] Korea Adv Inst Sci & Technol, Daejeon 34141, South Korea
Keywords
synthetic aperture radar; automatic target recognition; pose angle estimation; deep learning; convolutional neural networks; multi-task learning; generative adversarial networks; ADVERSARIAL NETWORKS; RECOGNITION;
DOI
10.3390/rs13193939
Chinese Library Classification (CLC)
X [Environmental Science, Safety Science]
Subject Classification Codes
08; 0830
Abstract
Although generative adversarial networks (GANs) have been successfully applied to diverse fields, training GANs on synthetic aperture radar (SAR) data is challenging due to speckle noise. From the perspective of human learning, it is natural to learn a task by using information from multiple sources. However, previous GAN works on SAR image generation have used only target-class information. Due to the backscattering characteristics of SAR signals, the structures of SAR images depend strongly on their pose angles; nevertheless, pose-angle information has not been incorporated into GAN models for SAR images. In this paper, we propose a novel GAN-based multi-task learning (MTL) method for SAR target image generation, called PeaceGAN, which attaches two additional structures, a pose estimator and an auxiliary classifier, to the side of its discriminator in order to effectively combine pose and class information via MTL. Extensive experiments show that the proposed MTL framework helps PeaceGAN's generator learn the distributions of SAR images effectively, so that it generates SAR target images more faithfully at intended pose angles for desired target classes than recent state-of-the-art methods.
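The abstract describes a discriminator augmented with two side branches, an auxiliary classifier and a pose estimator, trained jointly in a multi-task fashion. The following is a minimal illustrative sketch of that idea, not the authors' implementation: the input size (64x64 single-channel SAR chips), the number of target classes (10), the layer configuration, and the (cos θ, sin θ) pose parameterization are all assumptions made here for the example.

```python
# Minimal sketch (not the paper's implementation) of a GAN discriminator with
# two auxiliary heads -- an auxiliary classifier and a pose estimator -- sharing
# one convolutional backbone, as described at a high level in the abstract.
# Assumptions: 64x64 single-channel SAR chips, 10 target classes,
# pose angle parameterized as (cos theta, sin theta).
import torch
import torch.nn as nn

class MultiHeadDiscriminator(nn.Module):
    def __init__(self, num_classes: int = 10, base_channels: int = 64):
        super().__init__()
        # Shared convolutional backbone: 64x64 input -> 4x4 feature map.
        self.backbone = nn.Sequential(
            nn.Conv2d(1, base_channels, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(base_channels, base_channels * 2, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(base_channels * 2, base_channels * 4, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(base_channels * 4, base_channels * 8, 4, 2, 1), nn.LeakyReLU(0.2),
        )
        feat_dim = base_channels * 8 * 4 * 4
        self.adv_head = nn.Linear(feat_dim, 1)            # real/fake score
        self.cls_head = nn.Linear(feat_dim, num_classes)  # auxiliary classifier: class logits
        self.pose_head = nn.Linear(feat_dim, 2)           # pose estimator: (cos theta, sin theta)

    def forward(self, x: torch.Tensor):
        feat = self.backbone(x).flatten(1)
        return self.adv_head(feat), self.cls_head(feat), self.pose_head(feat)
```

In such a multi-task setup, the discriminator loss would combine an adversarial term with a class cross-entropy term and a pose-regression term, e.g. L_D = L_adv + lambda_cls * CE + lambda_pose * MSE, where lambda_cls and lambda_pose are hypothetical weighting hyperparameters; the exact losses and weights used by PeaceGAN are not given in the abstract.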
Pages: 25