DUGAN: Infrared and visible image fusion based on dual fusion paths and a U-type discriminator

Cited by: 16
Authors
Chang, Le [1 ]
Huang, Yongdong [2 ,3 ]
Li, Qiufu [4 ]
Zhang, Yuduo [2 ]
Liu, Lijun [2 ]
Zhou, Qingjian [2 ]
Affiliations
[1] Dalian Minzu Univ, Sch Comp Sci & Engn, Dalian 116600, Peoples R China
[2] Dalian Minzu Univ, Ctr Math & Informat Sci, Dalian 116600, Peoples R China
[3] North Minzu Univ, Inst Image Proc & Understanding, Yinchuan 750021, Peoples R China
[4] Shenzhen Univ, Natl Engn Lab Big Data Syst Comp Technol, Shenzhen 518060, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Image fusion; Generative adversarial network; Dual fusion paths; U-type discriminator; Attention block; GENERATIVE ADVERSARIAL NETWORK; MULTISCALE; PERFORMANCE; NEST;
DOI
10.1016/j.neucom.2024.127391
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Existing infrared and visible image fusion techniques based on generative adversarial networks (GANs) generally disregard local and texture detail features, which tends to limit fusion performance. Therefore, we propose a GAN model based on dual fusion paths and a U-type discriminator, denoted DUGAN. Specifically, an image path and a gradient path are integrated into the generator to fully extract content and texture detail features from the source images and their corresponding gradient images. Merging the output features of the two fusion paths helps the generator produce fusion results rich in information. In addition, we construct a U-type discriminator that attends to both the global and local information of the input images, which drives the network to generate fusion results visually consistent with the source images. Furthermore, we integrate attention blocks into the discriminator to improve the representation of salient information. Experimental results demonstrate that DUGAN outperforms other state-of-the-art methods in both qualitative and quantitative evaluation. The source code has been released at https://github.com/chang-le-11/DUGAN.
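The gradient path described above takes a "gradient image" of each source as input. As a hedged illustration only (the paper does not specify the exact operator here; a Sobel filter is a common choice for this purpose), the sketch below computes a Sobel gradient-magnitude image from a single-channel array:

```python
import numpy as np

def sobel_gradient(img):
    """Gradient-magnitude image via 3x3 Sobel filters.

    Illustrative stand-in for the gradient images fed to a
    gradient fusion path; edge-padded so output matches input size.
    """
    kx = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]], dtype=np.float64)
    ky = kx.T  # vertical-edge kernel is the transpose
    padded = np.pad(img.astype(np.float64), 1, mode="edge")
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            patch = padded[i:i + 3, j:j + 3]
            gx[i, j] = (patch * kx).sum()  # horizontal gradient
            gy[i, j] = (patch * ky).sum()  # vertical gradient
    return np.sqrt(gx ** 2 + gy ** 2)

# Flat regions yield zero gradient; a step edge yields a strong response.
flat = np.zeros((5, 5))
step = np.zeros((5, 5))
step[:, 3:] = 1.0
print(sobel_gradient(flat).max())   # 0 on a constant image
print(sobel_gradient(step)[2, 2])   # nonzero next to the vertical edge
```

In practice such a gradient image would be stacked with (or fed alongside) the source image so the texture-detail branch sees edge structure explicitly.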
Pages: 11