DMEF: Multi-Exposure Image Fusion Based on a Novel Deep Decomposition Method

Cited by: 8
Authors
Wu, Kangle [1 ]
Chen, Jun [1 ]
Ma, Jiayi [2 ]
Affiliations
[1] China Univ Geosci, Sch Automat, Hubei Key Lab Adv Control & Intelligent Automat C, Engn Res Ctr Intelligent Technol Geoexplorat,Mini, Wuhan 430074, Peoples R China
[2] Wuhan Univ, Sch Elect Informat, Wuhan 430072, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Deep decomposition; illumination; reflection; multi-exposure image fusion; retinex theory; QUALITY ASSESSMENT; RETINEX; PERFORMANCE; NETWORK;
DOI
10.1109/TMM.2022.3198327
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
In this paper, we propose a novel deep decomposition approach based on Retinex theory for multi-exposure image fusion, termed DMEF. Following the Retinex assumption, we first decompose the source images into illumination and reflection maps with a data-driven decomposition network, into which we introduce a pathwise interaction block that reactivates deep features lost in one path and embeds them into the other. Loss of illumination and reflection features during decomposition is thereby effectively suppressed. A high dynamic range illumination map is then obtained by fusing the separated illumination maps in the fusion network, and the reconstructed details in under-exposed and over-exposed regions become clearer with the help of the fused reflection map, which contains complete high-frequency scene information. Finally, the fused illumination and reflection maps are multiplied pixel by pixel to obtain the final fused image. Moreover, to retain discontinuities in the illumination map where the gradient of the reflection map changes steeply, we introduce a structure-preservation smoothness loss function that preserves structure information and eliminates visual artifacts in these regions. Extensive subjective and objective comparisons with state-of-the-art fusion methods demonstrate the superiority of the proposed network.
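The decompose-fuse-multiply pipeline described in the abstract can be sketched as follows. This is a minimal illustration of the Retinex reconstruction step (image = illumination x reflection, pixel-wise), not the paper's networks: the pixel-wise maximum used to fuse the illumination maps and the exact form of the structure-aware smoothness term are assumptions standing in for DMEF's learned fusion network and loss.

```python
import numpy as np

def retinex_reconstruct(illumination, reflection):
    # Retinex assumption: image = illumination * reflection, pixel-wise.
    # Both maps are assumed normalized to [0, 1].
    return np.clip(illumination * reflection, 0.0, 1.0)

def structure_aware_smoothness(illumination, reflection, lam=10.0):
    # Hypothetical sketch of a structure-preserving smoothness term:
    # penalize illumination gradients except where the reflection map
    # has steep gradients, so the illumination map stays piecewise
    # smooth while keeping its discontinuities at structural edges.
    # The exact loss used in DMEF may differ.
    gy_l = np.abs(np.diff(illumination, axis=0))
    gx_l = np.abs(np.diff(illumination, axis=1))
    gy_r = np.abs(np.diff(reflection, axis=0))
    gx_r = np.abs(np.diff(reflection, axis=1))
    return (gy_l * np.exp(-lam * gy_r)).mean() + \
           (gx_l * np.exp(-lam * gx_r)).mean()

# Toy example: two exposures of one scene share a reflection map;
# a simple pixel-wise max stands in for the learned fusion network.
reflection = np.array([[0.2, 0.8], [0.5, 0.9]])
illum_under = np.full((2, 2), 0.3)  # under-exposed illumination
illum_over = np.full((2, 2), 0.9)   # over-exposed illumination
illum_fused = np.maximum(illum_under, illum_over)
fused = retinex_reconstruct(illum_fused, reflection)
```

In this toy case the fused image is simply the reflection map scaled by the brighter illumination, and the smoothness term is zero because the constant illumination map has no gradients.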
Pages: 5690-5703
Page count: 14