Multi-Exposure Image Fusion via Multi-Scale and Context-Aware Feature Learning

Cited by: 11
Authors
Liu, Yu [1 ,2 ]
Yang, Zhigang [1 ,2 ]
Cheng, Juan [1 ,2 ]
Chen, Xun [3 ]
Affiliations
[1] Hefei Univ Technol, Dept Biomed Engn, Hefei 230009, Peoples R China
[2] Hefei Univ Technol, Anhui Prov Key Lab Measuring Theory & Precis Instr, Hefei 230009, Peoples R China
[3] Univ Sci & Technol China, Dept Elect Engn & Informat Sci, Hefei 230027, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Feature extraction; Semantics; Image fusion; Decoding; Transforms; Transformers; Visualization; Auto-encoder; global contextual information; multi-exposure image fusion; multi-scale features; Transformer; QUALITY ASSESSMENT;
DOI
10.1109/LSP.2023.3243767
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronics and Communication Technology]
Discipline Classification Codes
0808; 0809
Abstract
In this letter, a deep learning (DL)-based multi-exposure image fusion (MEF) method via multi-scale and context-aware feature learning is proposed, aiming to overcome the defects of existing traditional and DL-based methods. The proposed network is based on an auto-encoder architecture. First, an encoder that combines the convolutional network and Transformer is designed to extract multi-scale features and capture the global contextual information. Then, a multi-scale feature interaction (MSFI) module is devised to enrich the scale diversity of extracted features using cross-scale fusion and Atrous spatial pyramid pooling (ASPP). Finally, a decoder with a nest connection architecture is introduced to reconstruct the fused image. Experimental results show that the proposed method outperforms several representative traditional and DL-based MEF methods in terms of both visual quality and objective assessment.
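To make the pipeline described in the abstract concrete, the following is a minimal PyTorch sketch, not the authors' implementation: the class names (FusionNetSketch, GlobalContext), the two-scale channel widths, the use of nn.TransformerEncoderLayer for the global-context branch, and the feature-averaging fusion rule are all illustrative assumptions; the paper's actual encoder, MSFI module, and nest-connection decoder are more elaborate than what is shown here.

import torch
import torch.nn as nn
import torch.nn.functional as F


class ConvBlock(nn.Module):
    # 3x3 convolution + ReLU, the basic unit of this sketch.
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)

    def forward(self, x):
        return F.relu(self.conv(x))


class GlobalContext(nn.Module):
    # Self-attention over flattened spatial tokens to capture global context
    # (illustrative stand-in for the Transformer branch of the encoder).
    def __init__(self, ch, heads=4):
        super().__init__()
        self.attn = nn.TransformerEncoderLayer(d_model=ch, nhead=heads,
                                               dim_feedforward=2 * ch,
                                               batch_first=True)

    def forward(self, x):
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)            # (B, H*W, C)
        tokens = self.attn(tokens)
        return tokens.transpose(1, 2).reshape(b, c, h, w)


class ASPP(nn.Module):
    # Atrous spatial pyramid pooling: parallel dilated convolutions.
    def __init__(self, ch, rates=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv2d(ch, ch, 3, padding=r, dilation=r) for r in rates])
        self.project = nn.Conv2d(ch * len(rates), ch, kernel_size=1)

    def forward(self, x):
        return F.relu(self.project(torch.cat([b(x) for b in self.branches], 1)))


class FusionNetSketch(nn.Module):
    # Two-scale CNN encoder + Transformer context + ASPP + cross-scale decoder.
    def __init__(self, in_ch=1, base=32):
        super().__init__()
        self.enc1 = ConvBlock(in_ch, base)           # full-resolution features
        self.down = nn.MaxPool2d(2)
        self.enc2 = ConvBlock(base, base * 2)        # half-resolution features
        self.context = GlobalContext(base * 2)       # global contextual information
        self.aspp = ASPP(base * 2)                   # enrich scale diversity
        self.dec = ConvBlock(base * 2 + base, base)  # cross-scale feature fusion
        self.out = nn.Conv2d(base, in_ch, kernel_size=3, padding=1)

    def encode(self, x):
        f1 = self.enc1(x)
        f2 = self.aspp(self.context(self.enc2(self.down(f1))))
        return f1, f2

    def forward(self, under, over):
        # Shared encoder per exposure; averaging the two feature sets is a
        # simple placeholder for the learned fusion in the paper.
        ua, ub = self.encode(under)
        oa, ob = self.encode(over)
        f1, f2 = (ua + oa) / 2, (ub + ob) / 2
        up = F.interpolate(f2, scale_factor=2, mode="bilinear",
                           align_corners=False)
        return torch.sigmoid(self.out(self.dec(torch.cat([up, f1], dim=1))))


if __name__ == "__main__":
    net = FusionNetSketch()
    under = torch.rand(1, 1, 64, 64)   # under-exposed Y channel
    over = torch.rand(1, 1, 64, 64)    # over-exposed Y channel
    print(net(under, over).shape)      # -> torch.Size([1, 1, 64, 64])

Unsupervised MEF networks of this kind are commonly trained with an MEF-SSIM-style loss on pairs of differently exposed images; the specific loss and training setup used by the authors are not stated in the abstract, so none is included in the sketch.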
Pages: 100-104
Number of pages: 5