DUGAN: Infrared and visible image fusion based on dual fusion paths and a U-type discriminator

Cited by: 16
Authors
Chang, Le [1 ]
Huang, Yongdong [2 ,3 ]
Li, Qiufu [4 ]
Zhang, Yuduo [2 ]
Liu, Lijun [2 ]
Zhou, Qingjian [2 ]
Affiliations
[1] Dalian Minzu Univ, Sch Comp Sci & Engn, Dalian 116600, Peoples R China
[2] Dalian Minzu Univ, Ctr Math & Informat Sci, Dalian 116600, Peoples R China
[3] North Minzu Univ, Inst Image Proc & Understanding, Yinchuan 750021, Peoples R China
[4] Shenzhen Univ, Natl Engn Lab Big Data Syst Comp Technol, Shenzhen 518060, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Image fusion; Generative adversarial network; Dual fusion paths; U-type discriminator; Attention block; GENERATIVE ADVERSARIAL NETWORK; MULTISCALE; PERFORMANCE; NEST
DOI
10.1016/j.neucom.2024.127391
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Existing infrared and visible image fusion techniques based on generative adversarial networks (GANs) generally disregard local and texture detail features, which tends to limit fusion performance. We therefore propose a GAN model based on dual fusion paths and a U-type discriminator, denoted DUGAN. Specifically, image and gradient paths are integrated into the generator to fully extract content and texture detail features from the source images and their corresponding gradient images; by integrating the output features of the dual fusion paths, the generator produces fusion results rich in information. In addition, we construct a U-type discriminator that attends to both the global and local information of its input images, driving the network to generate fusion results visually consistent with the source images. Furthermore, we integrate attention blocks into the discriminator to improve the representation of salient information. Experimental results demonstrate that DUGAN outperforms other state-of-the-art methods in both qualitative and quantitative evaluations. The source code has been released at https://github.com/chang-le-11/DUGAN.
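The gradient path described in the abstract takes gradient images of the sources as input. The record does not specify which gradient operator is used; a Sobel filter is a common choice for this purpose, so the following is a minimal pure-Python sketch of how such a gradient image could be computed (the `sobel_gradient` helper is illustrative, not taken from the released code):

```python
def sobel_gradient(img):
    """Return the per-pixel gradient magnitude of a 2-D grayscale image
    (list of lists of numbers), using 3x3 Sobel kernels.
    Out-of-range neighbors are skipped, which is equivalent to zero padding."""
    h, w = len(img), len(img[0])
    kx = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal-gradient kernel
    ky = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical-gradient kernel
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            gx = gy = 0.0
            for dy in range(-1, 2):
                for dx in range(-1, 2):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        gx += kx[dy + 1][dx + 1] * img[yy][xx]
                        gy += ky[dy + 1][dx + 1] * img[yy][xx]
            out[y][x] = (gx * gx + gy * gy) ** 0.5  # gradient magnitude
    return out

# A vertical edge produces a strong response; a flat region produces none.
edge = [[0, 0, 1, 1]] * 4
flat = [[5, 5, 5, 5]] * 4
print(sobel_gradient(edge)[1][1])  # interior pixel on the edge -> 4.0
print(sobel_gradient(flat)[1][1])  # interior pixel in a flat region -> 0.0
```

In the paper's setup, such gradient images of the infrared and visible sources would feed the gradient path, while the raw sources feed the image path; the generator then fuses the output features of both paths.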
Pages: 11
Related Papers (54 in total)
[41] Wang, Jun; Peng, Jinye; Feng, Xiaoyi; He, Guiqing; Fan, Jianping. Fusion method for infrared and visible images by using non-negative sparse representation. INFRARED PHYSICS & TECHNOLOGY, 2014, 67: 477-489.
[42] Xu, Han; Ma, Jiayi; Jiang, Junjun; Guo, Xiaojie; Ling, Haibin. U2Fusion: A Unified Unsupervised Image Fusion Network. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2022, 44(1): 502-518.
[43] Xu, Meilong; Tang, Linfeng; Zhang, Hao; Ma, Jiayi. Infrared and visible image fusion via parallel scene and texture learning. PATTERN RECOGNITION, 2022, 132.
[44] Yi, Shi; Li, Junjie; Yuan, Xuesong. DFPGAN: Dual fusion path generative adversarial network for infrared and visible image fusion. INFRARED PHYSICS & TECHNOLOGY, 2021, 119.
[45] Yin, Haitao; Xiao, Jinghu; Chen, Hao. CSPA-GAN: A Cross-Scale Pyramid Attention GAN for Infrared and Visible Image Fusion. IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, 2023, 72.
[46] Zhang, Baohua; Lu, Xiaoqi; Pei, Haiquan; Zhao, Ying. A fusion algorithm for infrared and visible images based on saliency analysis and non-subsampled Shearlet transform. INFRARED PHYSICS & TECHNOLOGY, 2015, 73: 286-297.
[47] Zhang, Chengfang. Multifocus image fusion using multiscale transform and convolutional sparse representation. INTERNATIONAL JOURNAL OF WAVELETS MULTIRESOLUTION AND INFORMATION PROCESSING, 2021, 19(1).
[48] Zhang, H. AAAI Conference on Artificial Intelligence, 2020, 34: 12797.
[49] Zhang, Xingchen; Ye, Ping; Leung, Henry; Gong, Ke; Xiao, Gang. Object fusion tracking based on visible and infrared images: A comprehensive review. INFORMATION FUSION, 2020, 63: 166-187.
[50] Zhao, Cheng; Yang, Peng; Zhou, Feng; Yue, Guanghui; Wang, Shuigen; Wu, Huisi; Chen, Guoliang; Wang, Tianfu; Lei, Baiying. MHW-GAN: Multidiscriminator Hierarchical Wavelet Generative Adversarial Network for Multimodal Image Fusion. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024, 35(10): 13713-13727.