Multi-Exposure Image Fusion via Multi-Scale and Context-Aware Feature Learning

Cited by: 11
Authors
Liu, Yu [1 ,2 ]
Yang, Zhigang [1 ,2 ]
Cheng, Juan [1 ,2 ]
Chen, Xun [3 ]
Affiliations
[1] Hefei Univ Technol, Dept Biomed Engn, Hefei 230009, Peoples R China
[2] Hefei Univ Technol, Anhui Prov Key Lab Measuring Theory & Precis Instr, Hefei 230009, Peoples R China
[3] Univ Sci & Technol China, Dept Elect Engn & Informat Sci, Hefei 230027, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Feature extraction; Semantics; Image fusion; Decoding; Transforms; Transformers; Visualization; Auto-encoder; global contextual information; multi-exposure image fusion; multi-scale features; Transformer; QUALITY ASSESSMENT;
DOI
10.1109/LSP.2023.3243767
Chinese Library Classification (CLC)
TM [Electrical technology]; TN [Electronic and communication technology]
Discipline classification code
0808; 0809
Abstract
In this letter, a deep learning (DL)-based multi-exposure image fusion (MEF) method built on multi-scale and context-aware feature learning is proposed, aiming to overcome the shortcomings of existing traditional and DL-based methods. The proposed network adopts an auto-encoder architecture. First, an encoder that combines a convolutional network with a Transformer is designed to extract multi-scale features and capture global contextual information. Then, a multi-scale feature interaction (MSFI) module is devised to enrich the scale diversity of the extracted features through cross-scale fusion and atrous spatial pyramid pooling (ASPP). Finally, a decoder with a nested connection architecture is introduced to reconstruct the fused image. Experimental results show that the proposed method outperforms several representative traditional and DL-based MEF methods in terms of both visual quality and objective assessment.
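The pipeline described in the abstract can be illustrated with a minimal PyTorch sketch, assuming single-channel (luminance) inputs, two feature scales, and simple feature averaging as a stand-in fusion rule. All module names, channel widths, and hyper-parameters are assumptions for illustration; the paper's dedicated MSFI module and nested-connection decoder are reduced here to an ASPP block plus a plain convolutional decoder, so this is a rough sketch of the idea, not the authors' released implementation.

```python
# Hypothetical sketch: CNN + Transformer encoder, ASPP-based cross-scale
# interaction, and a simple decoder; not the authors' network.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ConvTransformerEncoder(nn.Module):
    """Extracts two feature scales with convolutions, then models global
    context on the coarser scale with a Transformer encoder layer."""
    def __init__(self, in_ch=1, ch=32):
        super().__init__()
        self.conv1 = nn.Sequential(nn.Conv2d(in_ch, ch, 3, padding=1), nn.ReLU(inplace=True))
        self.conv2 = nn.Sequential(nn.Conv2d(ch, ch * 2, 3, stride=2, padding=1), nn.ReLU(inplace=True))
        self.context = nn.TransformerEncoderLayer(d_model=ch * 2, nhead=4,
                                                  dim_feedforward=ch * 4,
                                                  batch_first=True)

    def forward(self, x):
        f1 = self.conv1(x)                       # fine-scale local features
        f2 = self.conv2(f1)                      # coarse-scale features
        b, c, h, w = f2.shape
        tokens = f2.flatten(2).transpose(1, 2)   # (B, H*W, C) token sequence
        f2 = self.context(tokens).transpose(1, 2).reshape(b, c, h, w)
        return f1, f2


class ASPP(nn.Module):
    """Atrous spatial pyramid pooling: parallel dilated convolutions."""
    def __init__(self, ch, rates=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(ch, ch, 3, padding=r, dilation=r) for r in rates)
        self.project = nn.Conv2d(ch * len(rates), ch, 1)

    def forward(self, x):
        return self.project(torch.cat([b(x) for b in self.branches], dim=1))


class FusionNet(nn.Module):
    """Fuses two exposures: shared encoder, feature averaging as a
    placeholder fusion rule, cross-scale interaction, and a decoder."""
    def __init__(self, ch=32):
        super().__init__()
        self.encoder = ConvTransformerEncoder(in_ch=1, ch=ch)
        self.aspp = ASPP(ch * 2)
        self.decoder = nn.Sequential(
            nn.Conv2d(ch + ch * 2, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 1, 3, padding=1), nn.Sigmoid())

    def forward(self, under, over):
        f1a, f2a = self.encoder(under)
        f1b, f2b = self.encoder(over)
        f1 = 0.5 * (f1a + f1b)                   # placeholder fusion rule
        f2 = self.aspp(0.5 * (f2a + f2b))        # enrich scale diversity
        f2 = F.interpolate(f2, size=f1.shape[-2:], mode='bilinear',
                           align_corners=False)  # cross-scale alignment
        return self.decoder(torch.cat([f1, f2], dim=1))


if __name__ == "__main__":
    net = FusionNet()
    y = net(torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64))
    print(y.shape)  # torch.Size([1, 1, 64, 64])
```

Flattening the coarse-scale feature map into a token sequence before the Transformer layer is the standard way to let self-attention aggregate global context over the whole image, which is the role the abstract assigns to the Transformer branch of the encoder.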
Pages: 100-104
Page count: 5