L2FUSION: LOW-LIGHT ORIENTED INFRARED AND VISIBLE IMAGE FUSION

Cited by: 7
Authors
Gao, Xiang
Lv, Guohua [1 ]
Dong, Aimei
Wei, Zhonghe
Cheng, Jinyong
Affiliations
[1] Qilu Univ Technol, Shandong Acad Sci, Minist Educ,Natl Supercomp Ctr Jinan,Shandong Com, Key Lab Comp Power Network & Informat Secur, Jinan, Peoples R China
Source
2023 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, ICIP | 2023
Keywords
Image fusion; low-light; deep learning; infrared image; visible image; NETWORK;
DOI
10.1109/ICIP49359.2023.10223183
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Infrared and visible image fusion aims to integrate salient targets and abundant texture information into a single fused image. Existing methods typically ignore the issue of illumination, so that there are problems of weak texture details and poor visual perception in case of low illumination. To address this issue, we propose a low-light oriented infrared and visible image fusion network, named L2Fusion. In particular, we first design a decomposition network according to Retinex theory to obtain the reflectance features of a visible image with low-light. Then, these features are integrated with the features extracted from the corresponding infrared image by a residual network. The finally fused image largely eliminates the negative impact caused by low illumination, and contains both salient targets and abundant texture information. Extensive experiments demonstrate the superiority of our L2Fusion over the state-of-the-art methods, in terms of both visual effect and quantitative metrics.
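The abstract's pipeline (Retinex decomposition of the low-light visible image, then combination with the infrared image) can be sketched in a minimal, non-learned form. This is not the authors' L2Fusion network: the learned decomposition network is replaced here by a classical single-scale Retinex step, where a Gaussian-blurred illumination estimate L gives reflectance R = V / L, and the "fusion" is a simple weighted average with the infrared intensity rather than a residual network.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def retinex_decompose(vis, sigma=15.0, eps=1e-6):
    """Single-scale Retinex approximation: estimate illumination L with a
    Gaussian surround, then recover reflectance R = V / (L + eps)."""
    illum = gaussian_filter(vis, sigma=sigma)
    refl = vis / (illum + eps)
    return illum, refl

def fuse_toy(vis, ir, sigma=15.0):
    """Toy stand-in for the fusion stage: average the (normalized)
    reflectance of the low-light visible image with the infrared image.
    L2Fusion instead fuses learned features with a residual network."""
    _, refl = retinex_decompose(vis, sigma=sigma)
    refl = refl / (refl.max() + 1e-6)          # normalize reflectance to [0, 1]
    fused = 0.5 * refl + 0.5 * ir              # illustrative equal-weight blend
    return np.clip(fused, 0.0, 1.0)
```

Because the reflectance divides out the (dim) illumination estimate, texture in a dark visible image is amplified before fusion, which is the intuition behind the Retinex-based design described in the abstract.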
Pages: 2405-2409
Page count: 5
References
22 in total
[1]   LLVIP: A Visible-infrared Paired Dataset for Low-light Vision [J].
Jia, Xinyu ;
Zhu, Chuang ;
Li, Minzhen ;
Tang, Wenqi ;
Zhou, Wenli .
2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION WORKSHOPS (ICCVW 2021), 2021, :3489-3497
[2]   RETINEX THEORY OF COLOR-VISION [J].
LAND, EH .
SCIENTIFIC AMERICAN, 1977, 237 (06) :108-&
[3]   RFN-Nest: An end-to-end residual fusion network for infrared and visible images [J].
Li, Hui ;
Wu, Xiao-Jun ;
Kittler, Josef .
INFORMATION FUSION, 2021, 73 :72-86
[4]   DenseFuse: A Fusion Approach to Infrared and Visible Images [J].
Li, Hui ;
Wu, Xiao-Jun .
IEEE TRANSACTIONS ON IMAGE PROCESSING, 2019, 28 (05) :2614-2623
[5]   Performance comparison of different multi-resolution transforms for image fusion [J].
Li, Shutao ;
Yang, Bin ;
Hu, Jianwen .
INFORMATION FUSION, 2011, 12 (02) :74-84
[6]   Microsoft COCO: Common Objects in Context [J].
Lin, Tsung-Yi ;
Maire, Michael ;
Belongie, Serge ;
Hays, James ;
Perona, Pietro ;
Ramanan, Deva ;
Dollar, Piotr ;
Zitnick, C. Lawrence .
COMPUTER VISION - ECCV 2014, PT V, 2014, 8693 :740-755
[7]   A general framework for image fusion based on multi-scale transform and sparse representation [J].
Liu, Yu ;
Liu, Shuping ;
Wang, Zengfu .
INFORMATION FUSION, 2015, 24 :147-164
[8]   SwinFusion: Cross-domain Long-range Learning for General Image Fusion via Swin Transformer [J].
Ma, Jiayi ;
Tang, Linfeng ;
Fan, Fan ;
Huang, Jun ;
Mei, Xiaoguang ;
Ma, Yong .
IEEE-CAA JOURNAL OF AUTOMATICA SINICA, 2022, 9 (07) :1200-1217
[9]   FusionGAN: A generative adversarial network for infrared and visible image fusion [J].
Ma, Jiayi ;
Yu, Wei ;
Liang, Pengwei ;
Li, Chang ;
Jiang, Junjun .
INFORMATION FUSION, 2019, 48 :11-26
[10]   Infrared and visible image fusion methods and applications: A survey [J].
Ma, Jiayi ;
Ma, Yong ;
Li, Chang .
INFORMATION FUSION, 2019, 45 :153-178