GAN-Based Focusing-Enhancement Method for Monochromatic Synthetic Aperture Imaging

Cited by: 5
Authors
Ye, Guoyao [1 ]
Zhang, Zixin [1 ]
Ding, Li [1 ,2 ,3 ]
Li, Yinwei [2 ,4 ]
Zhu, Yiming [1 ,2 ,3 ]
Affiliations
[1] Univ Shanghai Sci & Technol, Terahertz Technol Innovat Res Inst, Shanghai 200093, Peoples R China
[2] Univ Shanghai Sci & Technol, Shanghai Key Lab Modern Opt Syst, Shanghai 200093, Peoples R China
[3] Terahertz Sci Cooperat Innovat Ctr, Shanghai 200093, Peoples R China
[4] Tongji Univ, Shanghai Inst Intelligent Sci & Technol, Shanghai 200092, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Imaging; Generators; Sensors; Apertures; Generative adversarial networks; Gallium nitride; Standards; MMW near field imaging; monochromatic full-focus; SAR; image fusion; GAN-FEM;
DOI
10.1109/JSEN.2020.2996656
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronic and Communication Technology];
Discipline Classification Code
0808; 0809;
Abstract
Two-dimensional (2-D) synthetic aperture imaging with a single frequency suffers from a limited depth-of-focus (DOF), which makes it difficult to focus volumetric targets. In this paper, rather than using a wide band for 3-D imaging, this out-of-focus problem is treated as a multi-focal imaging issue. To overcome the limited DOF, we propose a generative adversarial network (GAN) based focusing-enhancement method (GAN-FEM) that fits an unknown out-of-focus kernel for MMW monochromatic synthetic aperture imaging. To determine which type of MMW image dataset is better suited as GAN input, grayscale and pseudo-color image datasets are each used to train the network. Proof-of-principle experiments performed at 94 GHz show that the proposed GAN-FEM substantially improves the focusing performance for volumetric targets. The effectiveness of the method confirms the focusing-enhancement capacity of a 2-D monochromatic imaging system for 3-D targets and offers a possible way to reduce system complexity in practical 3-D imaging missions.
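The abstract describes training a generator adversarially so that it maps out-of-focus monochromatic images to well-focused ones, in the spirit of DeblurGAN-style deblurring (cited by the paper). A minimal sketch of such a combined generator objective, assuming a sigmoid discriminator output and a hypothetical weight `lam` balancing the adversarial and content terms (not the authors' exact formulation):

```python
import numpy as np

def gan_fem_generator_loss(d_fake, gen_img, sharp_img, lam=100.0):
    """Hypothetical combined generator loss for focusing enhancement.

    d_fake   : discriminator scores in (0, 1) for generated (refocused) images
    gen_img  : generator output (refocused MMW image)
    sharp_img: well-focused reference image from the training set
    lam      : assumed weight balancing adversarial and content terms
    """
    # Adversarial term: push discriminator scores on generated images toward 1,
    # i.e., make refocused images indistinguishable from truly focused ones.
    adv = -np.mean(np.log(d_fake + 1e-8))
    # Content term: L1 distance between the refocused output and the
    # focused reference, which anchors the generator to the scene content.
    content = np.mean(np.abs(gen_img - sharp_img))
    return adv + lam * content
```

The large `lam` reflects the common choice in image-to-image GANs of letting the content term dominate so the adversarial term only sharpens detail; the actual weighting and network architecture in GAN-FEM are given in the paper itself.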
Pages: 11484-11489
Page count: 6