High-Resolution Refocusing for Defocused ISAR Images by Complex-Valued Pix2pixHD Network
Cited by: 11
Authors:
Yuan, Haoxuan [1]; Li, Hongbo [1]; Zhang, Yun [1]; Wang, Yong [1]; Liu, Zitao [1]; Wei, Chenxi [1]; Yao, Chengxin [1]
Affiliations:
[1] Harbin Inst Technol, Sch Elect & Informat Engn, Harbin 150001, Peoples R China
Funding:
National Natural Science Foundation of China;
Keywords:
Radar imaging;
Radar;
Imaging;
Image reconstruction;
Frequency modulation;
Feature extraction;
Time-frequency analysis;
Complex-valued (CV) network;
generative adversarial networks (GANs);
inverse synthetic aperture radar (ISAR);
radar image refocusing;
DOI:
10.1109/LGRS.2022.3210036
Chinese Library Classification (CLC):
P3 [Geophysics];
P59 [Geochemistry];
Discipline classification codes:
0708;
070902;
Abstract:
Inverse synthetic aperture radar (ISAR) is an effective method for detecting targets. However, for maneuvering targets, the Doppler frequency induced by an arbitrary scatterer on the target is time-varying, which defocuses the ISAR image and hampers subsequent recognition. Traditional methods struggle to refocus all positions on the target well. In recent years, generative adversarial networks (GANs) have achieved great success in image translation. However, current refocusing models ignore the high-order-term information contained in the relationship between the real and imaginary parts of the data. To this end, an end-to-end refocusing network, named complex-valued pix2pixHD (CVPHD), is proposed to learn the mapping from defocused to focused images, taking complex-valued (CV) ISAR images as input. A CV instance normalization layer is applied to mine the deep relationship between the complex parts by computing their covariance, and to accelerate training. Subsequently, an innovative adaptively weighted loss function is put forward to improve the overall refocusing effect. Finally, the proposed CVPHD is tested on both simulated and real datasets, yielding well-refocused results in both cases. Comparative experiments show that extending the pix2pixHD network to the CV domain reduces the refocusing error, and that CVPHD surpasses other autofocus methods in refocusing performance. The code and dataset are available online (https://github.com/yhx-hit/CVPHD).
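The CV instance normalization the abstract describes, normalizing via the covariance of the real and imaginary parts, can be sketched as covariance-based complex whitening. This is an illustrative NumPy sketch under assumed conventions (per-channel statistics over spatial axes, a small `eps` for stability), not the authors' implementation:

```python
import numpy as np

def cv_instance_norm(x, eps=1e-5):
    """Whiten a complex-valued feature map x of shape (C, H, W).

    Each complex value is treated as a 2-vector (real, imag); the 2x2
    covariance of the parts is computed over the spatial axes and its
    inverse matrix square root is applied, so the real/imag cross-term
    (higher-order) information is explicitly normalized away.
    """
    out = np.empty_like(x)
    for c in range(x.shape[0]):
        re = x[c].real - x[c].real.mean()
        im = x[c].imag - x[c].imag.mean()
        # 2x2 covariance matrix [[vrr, vri], [vri, vii]] of the parts
        vrr = (re * re).mean() + eps
        vii = (im * im).mean() + eps
        vri = (re * im).mean()
        # Closed-form inverse square root of a 2x2 SPD matrix:
        # sqrt(V) = (V + s*I) / t, with s = sqrt(det V), t = sqrt(tr V + 2s)
        s = np.sqrt(vrr * vii - vri * vri)
        t = np.sqrt(vrr + vii + 2.0 * s)
        inv = 1.0 / (s * t)
        wrr, wii, wri = (vii + s) * inv, (vrr + s) * inv, -vri * inv
        # Apply the whitening matrix to the (real, imag) pair
        out[c] = (wrr * re + wri * im) + 1j * (wri * re + wii * im)
    return out
```

After whitening, each channel's real and imaginary parts have unit variance and zero cross-covariance; a trainable deep-network version would typically add learnable scale and shift parameters after this step.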
Pages: 5