Ref-MEF: Reference-Guided Flexible Gated Image Reconstruction Network for Multi-Exposure Image Fusion

Cited by: 1
Authors
Huang, Yuhui [1 ]
Zhou, Shangbo [1 ]
Xu, Yufen [1 ]
Chen, Yijia [1 ]
Cao, Kai [1 ]
Affiliations
[1] Chongqing Univ, Coll Comp Sci, Chongqing 400044, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
multi-exposure fusion; image processing; computational photography; convolutional neural networks; QUALITY ASSESSMENT; PERFORMANCE; INFORMATION;
DOI
10.3390/e26020139
Chinese Library Classification
O4 [Physics];
Discipline Code
0702 ;
Abstract
Multi-exposure image fusion (MEF) is a computational approach that amalgamates multiple images, each captured at a different exposure level, into a single high-quality image that faithfully encapsulates the visual information from all the contributing images. Deep learning-based MEF methods often struggle with the inherent inflexibility of neural network structures, which makes it difficult to handle a variable number of exposure inputs dynamically. In response to this challenge, we introduce Ref-MEF, a reference-image-guided method for color multi-exposure fusion that is designed to accommodate an arbitrary number of inputs. We establish a reference-guided exposure correction (REC) module based on channel attention and spatial attention, which corrects input features and enhances pre-extracted features. The exposure-guided feature fusion (EGFF) module combines original image information and uses Gaussian filter weights for feature fusion while keeping the feature dimensions constant. Image reconstruction is completed through a gated context aggregation network (GCAN) and global residual learning (GRL). Our refined loss function incorporates gradient fidelity, producing high dynamic range images that are rich in detail and demonstrate superior visual quality. Our method leads significantly on evaluation metrics focused on image features, and also leads in holistic assessments. Notably, its computational efficiency advantage grows as the number of input images increases.
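The EGFF module's key idea, fusing a variable number of per-image feature maps into one fixed-size map via Gaussian exposure weights, can be sketched as follows. This is a minimal illustration in the spirit of classic well-exposedness weighting (Mertens-style exposure fusion), not the paper's actual implementation; the function names and the Gaussian parameters `mu` and `sigma` are hypothetical.

```python
import numpy as np

def gaussian_exposure_weights(images, mu=0.5, sigma=0.2):
    """Per-pixel weights that peak where intensity is near mid-exposure (mu).

    `images`: list of N HxWx3 arrays in [0, 1]. Returns an NxHxW weight
    array normalized across the N inputs, so the fusion below is a convex
    combination regardless of how many images are supplied.
    """
    means = [img.mean(axis=-1) for img in images]          # HxW luminance proxy per image
    w = [np.exp(-((m - mu) ** 2) / (2 * sigma ** 2)) for m in means]
    w = np.stack(w, axis=0)                                # NxHxW
    w /= w.sum(axis=0, keepdims=True) + 1e-8               # normalize over the N inputs
    return w

def fuse_features(features, weights):
    """Weighted sum over the input axis; output keeps one feature map's shape."""
    return (weights[..., None] * np.stack(features, axis=0)).sum(axis=0)
```

Because the weights are normalized over however many inputs are present, the fused output always has the shape of a single feature map, which is what lets the downstream reconstruction network stay fixed while N varies.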
Pages: 21