Multi-Exposure Image Fusion via Multi-Scale and Context-Aware Feature Learning

Cited by: 11
Authors
Liu, Yu [1 ,2 ]
Yang, Zhigang [1 ,2 ]
Cheng, Juan [1 ,2 ]
Chen, Xun [3 ]
Affiliations
[1] Hefei Univ Technol, Dept Biomed Engn, Hefei 230009, Peoples R China
[2] Hefei Univ Technol, Anhui Prov Key Lab Measuring Theory & Precis Instr, Hefei 230009, Peoples R China
[3] Univ Sci & Technol China, Dept Elect Engn & Informat Sci, Hefei 230027, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Feature extraction; Semantics; Image fusion; Decoding; Transforms; Transformers; Visualization; Auto-encoder; global contextual information; multi-exposure image fusion; multi-scale features; Transformer; QUALITY ASSESSMENT;
DOI
10.1109/LSP.2023.3243767
CLC Classification Number
TM [Electrical Engineering]; TN [Electronic Technology, Communication Technology];
Subject Classification Code
0808; 0809;
Abstract
In this letter, a deep learning (DL)-based multi-exposure image fusion (MEF) method via multi-scale and context-aware feature learning is proposed, aiming to overcome the shortcomings of existing traditional and DL-based methods. The proposed network is based on an auto-encoder architecture. First, an encoder that combines convolutional layers and a Transformer is designed to extract multi-scale features and capture global contextual information. Then, a multi-scale feature interaction (MSFI) module is devised to enrich the scale diversity of the extracted features via cross-scale fusion and atrous spatial pyramid pooling (ASPP). Finally, a decoder with a nest connection architecture is introduced to reconstruct the fused image. Experimental results show that the proposed method outperforms several representative traditional and DL-based MEF methods in terms of both visual quality and objective assessment.
Pages: 100-104
Page count: 5
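The abstract above describes a three-stage architecture: a convolution-plus-Transformer encoder, a multi-scale feature interaction module built on ASPP, and a decoder that reconstructs the fused image. Below is a minimal PyTorch sketch of that kind of pipeline for a two-exposure input, not the authors' implementation: the class names (MEFNet, ConvTransformerEncoder, ASPP), channel widths, dilation rates, and the single Transformer layer are illustrative assumptions, and the paper's cross-scale fusion and nest-connection decoder are simplified to a plain ASPP block and a two-layer decoder.

```python
# Hypothetical sketch (not the authors' code): an auto-encoder-style MEF network with a
# convolution + Transformer encoder, an ASPP block for scale diversity, and a simple
# decoder that fuses the two exposure branches. All layer sizes are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ConvTransformerEncoder(nn.Module):
    """Convolutional stem followed by one Transformer encoder layer for global context."""
    def __init__(self, channels=32):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.transformer = nn.TransformerEncoderLayer(
            d_model=channels, nhead=4, dim_feedforward=2 * channels, batch_first=True
        )

    def forward(self, x):
        f = self.stem(x)                            # local features: (B, C, H, W)
        b, c, h, w = f.shape
        tokens = f.flatten(2).transpose(1, 2)       # (B, H*W, C) token sequence
        f = self.transformer(tokens).transpose(1, 2).reshape(b, c, h, w)
        return f


class ASPP(nn.Module):
    """Atrous spatial pyramid pooling: parallel dilated convolutions, then projection."""
    def __init__(self, channels=32, rates=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, 3, padding=r, dilation=r) for r in rates
        )
        self.project = nn.Conv2d(channels * len(rates), channels, 1)

    def forward(self, x):
        return self.project(torch.cat([F.relu(b(x)) for b in self.branches], dim=1))


class MEFNet(nn.Module):
    """Fuses an under-exposed and an over-exposed single-channel (e.g. Y-channel) image."""
    def __init__(self, channels=32):
        super().__init__()
        self.encoder = ConvTransformerEncoder(channels)
        self.msfi = ASPP(channels)                  # stand-in for the MSFI module
        self.decoder = nn.Sequential(
            nn.Conv2d(2 * channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, under, over):
        f_u = self.msfi(self.encoder(under))
        f_o = self.msfi(self.encoder(over))
        return self.decoder(torch.cat([f_u, f_o], dim=1))


if __name__ == "__main__":
    net = MEFNet()
    under, over = torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64)
    print(net(under, over).shape)  # torch.Size([1, 1, 64, 64])
```

A faithful implementation would additionally extract features at several resolutions, exchange them across scales before the ASPP stage, and decode through densely nested skip connections as described in the letter; this sketch only illustrates the overall data flow.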