HoLoCo: Holistic and local contrastive learning network for multi-exposure image fusion

Cited by: 91
Authors
Liu, Jinyuan [1 ]
Wu, Guanyao [2 ,4 ]
Luan, Junsheng [2 ]
Jiang, Zhiying [2 ,4 ]
Liu, Risheng [3 ,4 ]
Fan, Xin [3 ,4 ]
Affiliations
[1] Dalian Univ Technol, Sch Mech Engn, Dalian 116086, Peoples R China
[2] Dalian Univ Technol, Sch Software Technol, Dalian 116024, Peoples R China
[3] Dalian Univ Technol, DUT RU Int Sch Informat Sci & Engn, Dalian 116620, Peoples R China
[4] Key Lab Ubiquitous Network & Serv Software Liaoning Prov, Dalian 116620, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Image fusion; Multi-exposure image; Illumination correction; Adversarial learning; Contrastive learning; QUALITY ASSESSMENT; ENSEMBLE; MODEL;
DOI
10.1016/j.inffus.2023.02.027
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence];
Discipline Classification Code
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Multi-exposure image fusion (MEF) aims to integrate multiple shots taken with different exposures and generate a single image with a higher dynamic range than any of the inputs. Existing deep learning-based MEF approaches adopt only reference high dynamic range (HDR) images as positive samples to guide the training of fusion networks. However, relying solely on these positive samples makes it difficult to find optimal parameters for the network as a whole, so structure and texture information is blurred or missing in the generated HDR results. Moreover, few approaches attempt to prevent illumination degeneration during the fusion process, resulting in poor color saturation in their fused results. To address these limitations, this paper introduces a novel holistic and local constraint built upon contrastive learning, namely HoLoCo, to discover the intrinsic information of both the source LDR images and the reference HDR image. In this manner, the generated fused images are pulled toward the HDR image and pushed away from the LDR source images in both image-based and patch-based latent feature spaces. In addition, inspired by Retinex theory, we propose a color correction module (CCM) to refine illumination features. The CCM involves dual streams that collaborate to ensure natural color information and detail consistency. Extensive experiments on two datasets show that HoLoCo consistently generates visually appealing HDR results with precise detail and vivid color rendition, performing favorably against state-of-the-art MEF approaches. Source code is available at https://github.com/JinyuanLiu-CV/HoLoCo.
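To make the contrastive constraint described in the abstract concrete, the following is a minimal, hypothetical sketch, not the authors' released implementation: it pulls a fused image toward the HDR reference (positive sample) and pushes it away from the under- and over-exposed LDR sources (negative samples) in a pretrained VGG-16 feature space. The layer choices, the L1 distance, the ratio-style loss, and all names below are assumptions made for illustration only.

# Hypothetical sketch of a pull/push (contrastive) loss in VGG feature space.
# Requires torchvision >= 0.13; ImageNet mean/std normalization is omitted for brevity.
import torch
import torch.nn as nn
import torchvision.models as models


class VGGContrastiveLoss(nn.Module):
    """Pull the fused image toward the HDR reference and push it away from
    the LDR inputs, using L1 distances between frozen VGG-16 features."""

    def __init__(self, layer_ids=(3, 8, 15), eps=1e-7):
        super().__init__()
        vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()
        self.slices = nn.ModuleList()
        prev = 0
        for idx in layer_ids:  # indices 3, 8, 15 correspond to relu1_2, relu2_2, relu3_3
            self.slices.append(nn.Sequential(*list(vgg.children())[prev:idx + 1]))
            prev = idx + 1
        for p in self.parameters():  # keep the feature extractor frozen
            p.requires_grad = False
        self.eps = eps

    def _features(self, x):
        feats = []
        for block in self.slices:
            x = block(x)
            feats.append(x)
        return feats

    def forward(self, fused, hdr_ref, ldr_under, ldr_over):
        f_f = self._features(fused)
        f_p = self._features(hdr_ref)
        f_u = self._features(ldr_under)
        f_o = self._features(ldr_over)
        loss = 0.0
        for ff, fp, fu, fo in zip(f_f, f_p, f_u, f_o):
            d_pos = torch.mean(torch.abs(ff - fp))  # pull toward the HDR positive
            d_neg = torch.mean(torch.abs(ff - fu)) + torch.mean(torch.abs(ff - fo))  # push away from the LDR negatives
            loss = loss + d_pos / (d_neg + self.eps)
        return loss


# Usage (tensors of shape (B, 3, H, W) with values roughly in [0, 1]):
#   loss_fn = VGGContrastiveLoss()
#   contrastive_term = loss_fn(fused, hdr, under, over)

The ratio form is one common way to combine positive and negative distances; the paper's holistic and local constraints would apply such image-level and patch-level comparisons within its own network and training setup.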
Pages: 237-249
Number of pages: 13