High Efficient Spatial and Radiation Information Mutual Enhancing Fusion Method for Visible and Infrared Image

Cited by: 1
Authors
Liu, Zongzhen [1 ,2 ,3 ,4 ,5 ]
Wei, Yuxing [2 ,3 ,4 ,5 ]
Huang, Geli [1 ]
Li, Chao [1 ]
Zhang, Jianlin [2 ,3 ,4 ,5 ]
Li, Meihui [2 ,3 ,4 ,5 ]
Liu, Dongxu [2 ,3 ,4 ,5 ]
Peng, Xiaoming [1 ]
Affiliations
[1] Univ Elect Sci & Technol China, Sch Automat Engn, Chengdu 611731, Peoples R China
[2] Chinese Acad Sci, Natl Key Lab Opt Field Manipulat Sci & Technol, Chengdu 610209, Peoples R China
[3] Chinese Acad Sci, Key Lab Opt Engn, Chengdu 610209, Peoples R China
[4] Chinese Acad Sci, Inst Opt & Elect, Chengdu 610209, Peoples R China
[5] Univ Chinese Acad Sci, Beijing 101408, Peoples R China
Keywords
Image fusion; cross-modal; light perception; visible and infrared image; mutual enhancement; NETWORK; ARCHITECTURE; ENHANCEMENT; JOINT; NEST
DOI
10.1109/ACCESS.2024.3351774
Chinese Library Classification (CLC) Number
TP [Automation Technology, Computer Technology]
Discipline Classification Code
0812
Abstract
Visible and infrared image fusion is an important image enhancement technique that aims to generate high-quality fused images with prominent targets and rich textures in extreme environments. However, under extreme lighting the texture details of visible-light images degrade severely, so the fused images produced by most current fusion methods have poor visual quality, which in turn hampers subsequent high-level vision tasks such as target detection and tracking. To address these challenges, this paper bridges the gap between image fusion and high-level vision tasks by proposing an efficient fusion method in which spatial and radiometric information enhance each other. First, we design a gradient residual dense block (LGCnet) to improve the description of fine spatial details in the fusion network. Then, we develop a cross-modal perceptual fusion (CMPF) module to facilitate modal interaction within the network, which effectively enhances the fusion of complementary information between modalities and reduces redundant learning. Finally, we design an adaptive light-aware network (ALPnet) to guide the training of the fusion network so that it adaptively selects the most useful information under different lighting conditions. Extensive experiments show that the proposed fusion approach is competitive with six state-of-the-art deep-learning methods in highlighting target features and describing the global scene.
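The gradient-residual and light-aware ideas described in the abstract can be illustrated with a minimal PyTorch sketch. This is not the authors' released implementation: the layer sizes, the Sobel-based gradient branch, the module and function names (SobelGradient, GradientResidualDenseBlock, illumination_weighted_intensity_loss, p_day), and the loss form are illustrative assumptions.

```python
# Minimal illustrative sketch (assumed design, not the paper's code) of:
#  (1) a dense convolution block with a gradient branch added as a residual, and
#  (2) an illumination-aware weighting of the intensity loss between modalities.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SobelGradient(nn.Module):
    """Fixed Sobel filters used as a simple per-channel gradient extractor (assumption)."""
    def __init__(self, channels):
        super().__init__()
        kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
        ky = kx.t()
        # Depth-wise kernel of shape (2*C, 1, 3, 3): [kx, ky] repeated per input channel.
        kernel = torch.stack([kx, ky]).unsqueeze(1).repeat(channels, 1, 1, 1)
        self.register_buffer("kernel", kernel)
        self.channels = channels

    def forward(self, x):
        g = F.conv2d(x, self.kernel, padding=1, groups=self.channels)
        gx, gy = g[:, 0::2], g[:, 1::2]          # horizontal / vertical gradients per channel
        return torch.abs(gx) + torch.abs(gy)


class GradientResidualDenseBlock(nn.Module):
    """Dense convolutions on the main path plus a gradient branch added as a residual."""
    def __init__(self, channels, growth=16):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, growth, 3, padding=1)
        self.conv2 = nn.Conv2d(channels + growth, growth, 3, padding=1)
        self.fuse = nn.Conv2d(channels + 2 * growth, channels, 1)
        self.grad = SobelGradient(channels)
        self.grad_conv = nn.Conv2d(channels, channels, 3, padding=1)
        self.act = nn.LeakyReLU(0.2, inplace=True)

    def forward(self, x):
        d1 = self.act(self.conv1(x))
        d2 = self.act(self.conv2(torch.cat([x, d1], dim=1)))
        dense = self.fuse(torch.cat([x, d1, d2], dim=1))
        grad_res = self.grad_conv(self.grad(x))   # residual carrying edge/texture information
        return x + dense + grad_res


def illumination_weighted_intensity_loss(fused, visible, infrared, p_day):
    """Weight the intensity loss toward the better-lit modality.

    p_day is a light-perception probability per image (e.g. from a small classifier
    such as the paper's ALPnet); its exact definition here is an assumption.
    """
    w_vis = p_day.view(-1, 1, 1, 1)
    w_ir = 1.0 - w_vis
    return (w_vis * F.l1_loss(fused, visible, reduction="none")
            + w_ir * F.l1_loss(fused, infrared, reduction="none")).mean()


if __name__ == "__main__":
    block = GradientResidualDenseBlock(channels=32)
    feat = torch.randn(2, 32, 64, 64)
    print(block(feat).shape)                      # torch.Size([2, 32, 64, 64])

    fused = torch.rand(2, 1, 64, 64)
    vis, ir = torch.rand_like(fused), torch.rand_like(fused)
    p_day = torch.tensor([0.9, 0.2])              # assumed light-aware classifier outputs
    print(illumination_weighted_intensity_loss(fused, vis, ir, p_day).item())
```

The point of the sketch is the routing: the gradient branch bypasses the dense convolutions and is added back as a residual so edge information reaches the output directly, while the illumination probability shifts the intensity loss toward whichever modality is better lit, mirroring how a light-aware network can guide fusion training under different lighting conditions.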
Pages: 6971-6992
Number of pages: 22