EV-Fusion: A Novel Infrared and Low-Light Color Visible Image Fusion Network Integrating Unsupervised Visible Image Enhancement

Cited by: 28
Authors
Zhang, Xin [1 ]
Wang, Xia [1 ]
Yan, Changda [1 ]
Sun, Qiyang [1 ]
Affiliations
[1] Beijing Institute of Technology, Key Laboratory of Photoelectronic Imaging Technology and System, Ministry of Education of China, Beijing 100081, People's Republic of China
Keywords
Image color analysis; Image fusion; Feature extraction; Image enhancement; Task analysis; Sensor phenomena and characterization; Lighting; infrared and visible image; nighttime environment; visible image enhancement; FRAMEWORK; NEST
DOI
10.1109/JSEN.2023.3346886
Chinese Library Classification (CLC)
TM [Electrical Technology]; TN [Electronic Technology, Communication Technology]
Discipline Codes
0808; 0809
Abstract
Infrared and visible image fusion can effectively integrate the advantages of the two source images, preserving salient target information and rich texture details. However, most existing fusion methods are designed only for well-illuminated scenes and tend to lose details in low-light scenes because of the poor brightness of the visible image. Some methods incorporate a light adjustment module, but they typically enhance only intensity information and neglect color features, yielding unsatisfactory visual effects in the fused images. To address this issue, this article proposes a novel method, EV-Fusion, which explores the latent color and detail features in visible images and improves the visual perception of fused images. Specifically, an unsupervised image enhancement module is designed that effectively restores texture, structure, and color information in visible images using several non-reference loss functions. An intensity image fusion module then integrates the enhanced visible image with the infrared image. Moreover, to strengthen infrared salient object features in the fused images, we propose an infrared bilateral-guided saliency map that is embedded into the fusion loss functions. Extensive experiments demonstrate that our method outperforms state-of-the-art (SOTA) infrared and visible image fusion methods.
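The abstract names two trainable ideas: non-reference losses for unsupervised visible-image enhancement, and a fusion loss weighted by an infrared saliency map. The PyTorch sketch below illustrates plausible forms of these terms under stated assumptions; it is not the authors' implementation. The gray-world color constancy and exposure losses follow common practice in zero-reference enhancement, and the paper's bilateral guidance is approximated here by box-filtered local contrast (the helper names ir_saliency and fusion_intensity_loss are hypothetical).

    import torch
    import torch.nn.functional as F

    def color_constancy_loss(img: torch.Tensor) -> torch.Tensor:
        # Gray-world prior: channel means of a well-restored color image should agree.
        mean_rgb = img.mean(dim=(2, 3))                      # (B, 3)
        r, g, b = mean_rgb[:, 0], mean_rgb[:, 1], mean_rgb[:, 2]
        return ((r - g) ** 2 + (r - b) ** 2 + (g - b) ** 2).mean()

    def exposure_loss(img: torch.Tensor, patch: int = 16, level: float = 0.6) -> torch.Tensor:
        # Push local mean brightness of the enhanced image toward a well-exposed level.
        gray = img.mean(dim=1, keepdim=True)                 # (B, 1, H, W)
        local_mean = F.avg_pool2d(gray, patch)
        return ((local_mean - level) ** 2).mean()

    def ir_saliency(ir: torch.Tensor, kernel: int = 31) -> torch.Tensor:
        # Stand-in for the paper's bilateral-guided saliency map: local contrast of
        # the infrared image against a box-blurred background, normalized to [0, 1].
        pad = kernel // 2
        background = F.avg_pool2d(ir, kernel, stride=1, padding=pad)
        sal = (ir - background).abs()
        return sal / (sal.amax(dim=(2, 3), keepdim=True) + 1e-8)

    def fusion_intensity_loss(fused, ir, vis_gray):
        # Saliency-weighted intensity loss: follow the infrared image where its
        # objects are salient, and the enhanced visible image elsewhere.
        w = ir_saliency(ir)
        return (w * (fused - ir).abs() + (1.0 - w) * (fused - vis_gray).abs()).mean()

    # Dummy tensors standing in for network outputs.
    vis_enh = torch.rand(1, 3, 128, 128)   # enhanced low-light visible image
    ir = torch.rand(1, 1, 128, 128)        # registered infrared image
    fused = torch.rand(1, 1, 128, 128)     # fused intensity image
    total = (color_constancy_loss(vis_enh) + exposure_loss(vis_enh)
             + fusion_intensity_loss(fused, ir, vis_enh.mean(dim=1, keepdim=True)))

Because the saliency weight w rises toward 1 on hot, high-contrast infrared objects, the fused image is pulled toward the infrared intensities there and toward the enhanced visible image in the remaining background, which matches the role the abstract assigns to the saliency map.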
Pages: 4920-4934 (15 pages)