High Efficient Spatial and Radiation Information Mutual Enhancing Fusion Method for Visible and Infrared Image

Cited by: 1
Authors
Liu, Zongzhen [1 ,2 ,3 ,4 ,5 ]
Wei, Yuxing [2 ,3 ,4 ,5 ]
Huang, Geli [1 ]
Li, Chao [1 ]
Zhang, Jianlin [2 ,3 ,4 ,5 ]
Li, Meihui [2 ,3 ,4 ,5 ]
Liu, Dongxu [2 ,3 ,4 ,5 ]
Peng, Xiaoming [1 ]
Affiliations
[1] Univ Elect Sci & Technol China, Sch Automat Engn, Chengdu 611731, Peoples R China
[2] Chinese Acad Sci, Natl Key Lab Opt Field Manipulat Sci & Technol, Chengdu 610209, Peoples R China
[3] Chinese Acad Sci, Key Lab Opt Engn, Chengdu 610209, Peoples R China
[4] Chinese Acad Sci, Inst Opt & Elect, Chengdu 610209, Peoples R China
[5] Univ Chinese Acad Sci, Beijing 101408, Peoples R China
Keywords
Image fusion; cross-modal; light perception; visible and infrared image; mutual enhancement; NETWORK; ARCHITECTURE; ENHANCEMENT; JOINT; NEST
DOI
10.1109/ACCESS.2024.3351774
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Visible and infrared image fusion is an important image enhancement technique that aims to generate high-quality fused images with prominent targets and rich textures in extreme environments. However, because texture details in visible-light images degrade severely under extreme lighting, the fused images produced by most current fusion methods offer poor visual perception, which seriously hampers subsequent high-level vision tasks such as target detection and tracking. To address these challenges, this paper bridges the gap between image fusion and high-level vision tasks by proposing an efficient fusion method in which spatial and radiometric information enhance each other. First, we design a gradient residual dense block (LGCnet) to improve the description of fine spatial details in the fusion network. Then, we develop a cross-modal perceptual fusion (CMPF) module to facilitate modal interaction in the network, which effectively enhances the fusion of complementary information between modalities and reduces redundant learning. Finally, we design an adaptive light-aware network (ALPnet) that guides the training of the fusion network so that it adaptively selects the most effective information for fusion under different lighting conditions. Extensive experiments show that the proposed approach compares favorably with six state-of-the-art deep-learning methods in highlighting target features and describing the global scene.
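The abstract names a "gradient residual" branch only at a high level. As an illustrative sketch only (not the paper's actual LGCnet, whose details are not given in this record), one common way such a branch injects edge information is to compute a Sobel gradient magnitude map and add it back onto the features as a residual:

```python
import numpy as np

def conv2(img, k):
    """Valid 2-D correlation of a grayscale image with a 3x3 kernel."""
    H, W = img.shape
    out = np.zeros((H - 2, W - 2))
    for i in range(H - 2):
        for j in range(W - 2):
            out[i, j] = np.sum(img[i:i + 3, j:j + 3] * k)
    return out

def sobel_gradients(img):
    """Gradient magnitude from horizontal and vertical Sobel responses."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    return np.hypot(conv2(img, kx), conv2(img, ky))

def gradient_residual(img, alpha=0.1):
    """Add a scaled gradient map onto the (center-cropped) input as a residual,
    emphasizing fine spatial detail such as edges."""
    g = sobel_gradients(img)
    return img[1:-1, 1:-1] + alpha * g
```

Here `alpha`, the hand-written convolution, and the function names are all assumptions for illustration; in a real fusion network the gradient branch would operate on learned feature maps rather than raw pixels.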
Pages: 6971-6992
Page count: 22