PCSGAN: Perceptual cyclic-synthesized generative adversarial networks for thermal and NIR to visible image transformation

Cited by: 22
Authors
Babu, Kancharagunta Kishan [1 ]
Dubey, Shiv Ram [1 ]
Affiliations
[1] Indian Institute of Information Technology, Computer Vision Group, Chittoor 517646, Andhra Pradesh, India
Keywords
Image transformation; Thermal; Near-Infrared; Perceptual loss; Adversarial loss; Cyclic-synthesized loss; Generative adversarial network; Translation
DOI
10.1016/j.neucom.2020.06.104
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
In many real-world scenarios, poor lighting makes it difficult to capture images in the visible (VIS) spectrum. Images can still be captured in such scenarios using Near-Infrared (NIR) and Thermal (THM) cameras, but NIR and THM images contain limited detail. Thus, there is a need to transform images from THM/NIR to VIS for better understanding. This is a non-trivial task, however, due to large domain discrepancies and the lack of abundant datasets. Generative Adversarial Networks (GANs) can transform images from one domain to another. Most available GAN-based methods train with an objective function that combines the adversarial loss with a pixel-wise loss (such as L1 or L2). With such objective functions, the quality of the transformed images in THM/NIR to VIS transformation remains unsatisfactory, so better objective functions are needed to improve the quality, fine detail, and realism of the transformed images. A new model for THM/NIR to VIS image transformation, called the Perceptual Cyclic-Synthesized Generative Adversarial Network (PCSGAN), is introduced to address these issues. PCSGAN combines perceptual (i.e., feature-based) losses with the pixel-wise and adversarial losses. Both quantitative and qualitative measures are used to judge the performance of the PCSGAN model on the WHU-IIP face and RGB-NIR scene datasets. The proposed PCSGAN outperforms state-of-the-art image transformation models, including Pix2pix, DualGAN, CycleGAN, PS2GAN, and PAN, in terms of the SSIM, MSE, PSNR, and LPIPS evaluation measures. The code is available at https://github.com/KishanKancharagunta/PCSGAN. (C) 2020 Elsevier B.V. All rights reserved.
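The abstract names the ingredients of the PCSGAN objective (adversarial, pixel-wise, cyclic-synthesized, and perceptual losses) but not its exact form. The following is a minimal PyTorch sketch of such a composite generator loss, assuming a CycleGAN-style two-generator setup (G_AB: THM/NIR to VIS, G_BA: VIS to THM/NIR) with least-squares adversarial terms, an L1 perceptual distance on frozen VGG19 features, and a cyclic-synthesized term in the style of the authors' earlier CSGAN work. The loss weights, the VGG cut point, and all function names here are illustrative assumptions, not the paper's reported settings.

    import torch
    import torch.nn as nn
    import torchvision.models as models

    class VGGFeatures(nn.Module):
        """Frozen VGG19 trunk for feature-space (perceptual) distances.
        Inputs are assumed to be 3-channel, ImageNet-normalized tensors."""
        def __init__(self, cut=16):  # cut index is an assumption (~relu3_3)
            super().__init__()
            vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1).features
            self.trunk = nn.Sequential(*list(vgg.children())[:cut]).eval()
            for p in self.trunk.parameters():
                p.requires_grad = False

        def forward(self, x):
            return self.trunk(x)

    def pcsgan_generator_loss(G_AB, G_BA, D_A, D_B, real_A, real_B, vgg,
                              lam_pix=10.0, lam_cs=10.0, lam_perc=1.0):
        """Combined generator objective for paired THM/NIR (A) <-> VIS (B)
        data. The lam_* weights are illustrative, not the paper's values."""
        l1, mse = nn.L1Loss(), nn.MSELoss()
        fake_B, fake_A = G_AB(real_A), G_BA(real_B)   # synthesized images
        cyc_A, cyc_B = G_BA(fake_B), G_AB(fake_A)     # cycled reconstructions
        # Least-squares adversarial terms: push D(fake) toward the "real" label.
        pa, pb = D_A(fake_A), D_B(fake_B)
        adv = mse(pa, torch.ones_like(pa)) + mse(pb, torch.ones_like(pb))
        # Pixel-wise L1 between synthesized images and paired ground truth.
        pix = l1(fake_A, real_A) + l1(fake_B, real_B)
        # Cyclic-synthesized term: cycled vs. synthesized image in the same
        # domain (following the authors' earlier CSGAN formulation).
        cs = l1(cyc_A, fake_A) + l1(cyc_B, fake_B)
        # Perceptual term: L1 distance between frozen VGG feature maps.
        perc = l1(vgg(fake_A), vgg(real_A)) + l1(vgg(fake_B), vgg(real_B))
        return adv + lam_pix * pix + lam_cs * cs + lam_perc * perc

In training, this generator loss would be alternated with the usual discriminator updates; the feature-space perceptual term is what distinguishes a PCSGAN-style objective from the purely pixel-wise objectives the abstract criticizes.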
Pages: 41-50 (10 pages)
Related References (29 in total)
[1] Zhu J.-Y., Park T., Isola P., Efros A.A. Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks. IEEE International Conference on Computer Vision (ICCV), 2017. DOI: 10.1109/ICCV.2017.244
[2] Armanious K., et al. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2019, p. 3267. DOI: 10.1109/ICASSP.2019.8682677
[3] Babu K.K., et al. arXiv:2001.05489
[4] Chen C., et al. arXiv:1903.00963
[5] Chen J.X., et al. International Conference on Signal Processing (ICSP), 2016, p. 663. DOI: 10.1109/ICSP.2016.7877915
[6] Goodfellow I.J., et al. Generative Adversarial Nets. Advances in Neural Information Processing Systems (NeurIPS), 2014, 27, p. 2672
[7] Guo T.T., et al. IEEE Global Conference on Signal and Information Processing (GlobalSIP), 2016, p. 237. DOI: 10.1109/GlobalSIP.2016.7905839
[8] Hu S., Short N., Riggan B.S., Chasse M., Sarfraz M.S. Heterogeneous Face Recognition: Recent Advances in Infrared-to-Visible Matching. 12th IEEE International Conference on Automatic Face and Gesture Recognition (FG), 2017, pp. 883-890
[9] Isola P., Zhu J.-Y., Zhou T., Efros A.A. Image-to-Image Translation with Conditional Adversarial Networks. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, pp. 5967-5976
[10] Ji Y., Zhang H., Wu Q.M.J. Saliency detection via conditional adversarial image-to-image network. Neurocomputing, 2018, 316: 357-368