Rethinking multi-exposure image fusion with extreme and diverse exposure levels: A robust framework based on Fourier transform and contrastive learning

Cited by: 19
Authors
Qu, Linhao [1 ]
Liu, Shaolei [1 ]
Wang, Manning [1 ]
Song, Zhijian [1 ]
Affiliations
[1] Fudan Univ, Digital Med Res Ctr, Sch Basic Med Sci, Shanghai 200032, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Multi-exposure image fusion; Fourier transform; Contrastive learning; Deep learning; QUALITY ASSESSMENT; PERFORMANCE; NETWORK;
DOI
10.1016/j.inffus.2022.12.002
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Multi-exposure image fusion (MEF) is an important technique for generating high dynamic range images. However, most existing MEF studies focus on fusing a moderately over-exposed image and a moderately under-exposed image, and they are not robust in fusing images with extreme and diverse exposure levels. In this paper, we propose a robust MEF framework based on Fourier transform and contrastive learning. Specifically, we develop a Fourier transform-based pixel intensity transfer strategy to synthesize images with diverse exposure levels from normally exposed natural images and train an encoder-decoder network to reconstruct the original natural image. In this way, the encoder and decoder can learn to extract features from images with diverse exposure levels and generate fused images with normal exposure. We propose a contrastive regularization loss to further enhance the capability of the network in recovering normal exposure levels. In addition, we construct an extreme MEF benchmark dataset and a random MEF benchmark dataset for a more comprehensive evaluation of MEF algorithms. We extensively compare our method with fifteen competitive traditional and deep learning-based MEF algorithms on three benchmark datasets, and our method outperforms the other methods in both subjective visual effects and objective evaluation metrics. Our code, datasets and all fused images will be released.
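The Fourier transform-based pixel intensity transfer strategy described in the abstract can be illustrated with a much-simplified sketch: assuming (as a simplification, not the paper's actual procedure) that an exposure shift is modeled by scaling the amplitude spectrum while keeping the phase unchanged, one normally exposed image yields under- and over-exposed variants with the same structure. The function name `synthesize_exposure` and the scale factors are illustrative only.

```python
import numpy as np

def synthesize_exposure(img, scale):
    """Simplified sketch: scale the Fourier amplitude spectrum while
    keeping the phase, shifting overall brightness (exposure) but
    preserving image structure."""
    f = np.fft.fft2(img)
    amp, phase = np.abs(f), np.angle(f)
    out = np.real(np.fft.ifft2(amp * scale * np.exp(1j * phase)))
    return np.clip(out, 0.0, 1.0)  # keep intensities in [0, 1]

# a normally exposed toy "image" with intensities in [0.3, 0.7]
rng = np.random.default_rng(0)
img = rng.uniform(0.3, 0.7, size=(8, 8))

under = synthesize_exposure(img, 0.4)  # darker, under-exposed variant
over = synthesize_exposure(img, 1.8)   # brighter, over-exposed variant
print(under.mean() < img.mean() < over.mean())  # prints True
```

Pairs of such synthesized variants, with the original natural image as the reconstruction target, would then supervise the encoder-decoder network described in the abstract.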
Pages: 389-403
Number of pages: 15