DIVFusion: Darkness-free infrared and visible image fusion

Cited: 231
Authors
Tang, Linfeng [1]
Xiang, Xinyu [1]
Zhang, Hao [1]
Gong, Meiqi [1]
Ma, Jiayi [1]
Affiliations
[1] Wuhan University, Electronic Information School, Wuhan 430072, China
Keywords
Image fusion; Low-light image enhancement; Scene-illumination disentangled; Texture-contrast enhancement; NETWORK; ENHANCEMENT; PERFORMANCE; RETINEX; NEST
DOI
10.1016/j.inffus.2022.10.034
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
As a vital image enhancement technology, infrared and visible image fusion aims to generate high-quality fused images with salient targets and abundant texture in extreme environments. However, existing image fusion methods are designed for infrared and visible images under normal illumination. In night scenes, they suffer from weak texture details and poor visual perception due to the severe degradation of visible images, which hampers subsequent visual applications. To this end, this paper proposes a darkness-free infrared and visible image fusion method (DIVFusion), which reasonably lights up the darkness and facilitates the aggregation of complementary information. Specifically, to improve the fusion quality of nighttime images, which suffer from low illumination, texture concealment, and color distortion, we first design a scene-illumination disentangled network (SIDNet) to strip illumination degradation from nighttime visible images while preserving the informative features of the source images. Then, a texture-contrast enhancement fusion network (TCEFNet) is devised to integrate complementary information and enhance the contrast and texture details of the fused features. Moreover, a color consistency loss is designed to mitigate the color distortion introduced by enhancement and fusion. Finally, we fully exploit the intrinsic relationship between low-light image enhancement and image fusion, achieving effective coupling and mutual reinforcement. In this way, the proposed method generates fused images with realistic color and significant contrast in an end-to-end manner. Extensive experiments demonstrate that DIVFusion is superior to state-of-the-art algorithms in both visual quality and quantitative evaluation. In particular, low-light enhancement and dual-modal fusion provide more effective information in the fused image and boost high-level vision tasks. Our code is publicly available at https://github.com/Xinyu-Xiang/DIVFusion.
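To make the pipeline described in the abstract concrete, the following PyTorch sketch wires the two named networks together. It is not the authors' implementation (that lives in the linked repository): the module layouts, channel widths, and the chromaticity-based stand-in for the color consistency loss are illustrative assumptions based only on the abstract.

```python
import torch
import torch.nn as nn

class SIDNet(nn.Module):
    # Scene-illumination disentangled network (hypothetical layout):
    # splits a nighttime visible image into an illumination map and an
    # illumination-free reflectance component, in the spirit of Retinex.
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 32, 3, padding=1), nn.LeakyReLU(0.2),
        )
        self.illum_head = nn.Sequential(nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid())
        self.refl_head = nn.Sequential(nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid())

    def forward(self, vis):
        feat = self.encoder(vis)
        return self.illum_head(feat), self.refl_head(feat)

class TCEFNet(nn.Module):
    # Texture-contrast enhancement fusion network (hypothetical layout):
    # concatenates the illumination-free visible component with the
    # infrared image and regresses the fused result.
    def __init__(self):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, refl, ir):
        return self.fuse(torch.cat([refl, ir], dim=1))

def color_consistency_loss(fused, refl):
    # Hedged stand-in for the paper's color consistency loss: penalize
    # deviation between the per-pixel chromaticity (channel ratios) of
    # the fused image and of the enhanced visible component.
    eps = 1e-6
    fused_chroma = fused / (fused.sum(dim=1, keepdim=True) + eps)
    refl_chroma = refl / (refl.sum(dim=1, keepdim=True) + eps)
    return (fused_chroma - refl_chroma).abs().mean()

# End-to-end forward pass: enhancement (SIDNet) and fusion (TCEFNet)
# sit in one computation graph, so fusion gradients also shape SIDNet.
sidnet, tcefnet = SIDNet(), TCEFNet()
vis_night = torch.rand(1, 3, 256, 256)   # RGB nighttime visible image
ir = torch.rand(1, 1, 256, 256)          # single-channel infrared image
illum, refl = sidnet(vis_night)          # strip illumination degradation
fused = tcefnet(refl, ir)                # aggregate complementary information
loss_color = color_consistency_loss(fused, refl)
```

Because both networks are composed in a single graph and trained jointly, the coupling and mutual reinforcement of enhancement and fusion that the abstract emphasizes follows directly from end-to-end optimization rather than from a separate enhancement pre-processing stage.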
Pages: 477-493
Page count: 17