SaReGAN: a salient regional generative adversarial network for visible and infrared image fusion

Cited by: 5
Authors
Gao, Mingliang [1 ]
Zhou, Yi'nan [2 ]
Zhai, Wenzhe [1 ]
Zeng, Shuai [3 ]
Li, Qilei [4 ]
Affiliations
[1] Shandong Univ Technol, Coll Elect & Elect Engn, Zibo 255000, Shandong, Peoples R China
[2] Genesis AI Lab, Futong Technol, Chengdu 610054, Peoples R China
[3] Sichuan Univ, West China Univ Hosp 2, Dept Obstet & Gynaecol, Chengdu, Sichuan, Peoples R China
[4] Queen Mary Univ London, Sch Elect Engn & Comp Sci, London E1 4NS, England
Keywords
Smart city; Image fusion; Visible and infrared image; Generative adversarial network; Salient region; PERFORMANCE;
DOI
10.1007/s11042-023-14393-2
Chinese Library Classification: TP [Automation technology; computer technology]
Discipline code: 0812
Abstract
Multispectral image fusion plays a crucial role in smart-city environment safety. In visible and infrared image fusion, object vanishment after fusion is a key problem that restricts fusion performance. To address this problem, a novel Salient Regional Generative Adversarial Network (SaReGAN) is presented for infrared and visible (VIS) image fusion. SaReGAN consists of three parts. In the first part, the salient regions of the infrared image are extracted via a visual saliency map, and the information in these regions is preserved. In the second part, the VIS image, the infrared image, and the salient information are merged thoroughly in the generator to produce a pre-fused image. In the third part, the discriminator attempts to distinguish the pre-fused image from the VIS image, so that the generator learns details from the VIS image through the adversarial mechanism. Experimental results verify that SaReGAN outperforms other state-of-the-art methods in both quantitative and qualitative evaluations.
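The three-part pipeline described in the abstract can be sketched, outside the GAN training loop, as a saliency-weighted blend: infrared content dominates where it is salient, visible content dominates elsewhere. This is only a minimal illustration; the global-contrast-style saliency measure and the `fuse` function below are assumptions for demonstration, not the paper's actual network.

```python
import numpy as np

def saliency_map(ir: np.ndarray) -> np.ndarray:
    """Crude visual saliency: absolute distance from the global mean
    intensity, normalised to [0, 1]. A stand-in for the paper's
    visual-saliency detector."""
    s = np.abs(ir.astype(np.float64) - ir.mean())
    peak = s.max()
    return s / peak if peak > 0 else s

def fuse(ir: np.ndarray, vis: np.ndarray) -> np.ndarray:
    """Salient-region-weighted fusion: pixels with high infrared
    saliency (e.g. hot objects) keep infrared intensity, the rest
    keep visible-image detail."""
    w = saliency_map(ir)
    return w * ir.astype(np.float64) + (1.0 - w) * vis.astype(np.float64)
```

In SaReGAN itself this weighting is learned adversarially rather than fixed: the discriminator pushes the generator to recover VIS detail, while the preserved salient regions prevent infrared objects from vanishing in the fused result.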
Pages: 61659-61671
Number of pages: 13