EEMEFN: Low-Light Image Enhancement via Edge-Enhanced Multi-Exposure Fusion Network

Citations: 0
Authors
Zhu, Minfeng [1 ,2 ]
Pan, Pingbo [2 ,3 ]
Chen, Wei [1 ]
Yang, Yi [2 ]
Affiliations
[1] Zhejiang Univ, State Key Lab CAD & CG, Hangzhou, Peoples R China
[2] Univ Technol Sydney, ReLER Lab, Ultimo, Australia
[3] Baidu Res, Melbourne, Vic, Australia
Funding
National Natural Science Foundation of China;
Keywords
DOI
Not available
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
This work focuses on extremely low-light image enhancement, which aims to improve image brightness and reveal hidden information in darkened areas. Recently, image enhancement approaches have yielded impressive progress. However, existing methods still suffer from three main problems: (1) low-light images are usually high-contrast, and existing methods may fail to recover image details in extremely dark or bright areas; (2) current methods cannot precisely correct the color of low-light images; (3) when object edges are unclear, a pixel-wise loss may treat pixels of different objects equally and produce blurry images. In this paper, we propose a two-stage method called Edge-Enhanced Multi-Exposure Fusion Network (EEMEFN) to enhance extremely low-light images. In the first stage, we employ a multi-exposure fusion module to address the high-contrast and color-bias issues: we synthesize a set of images with different exposure times from a single image and construct an accurate normal-light image by combining well-exposed areas under different illumination conditions. This stage produces realistic initial images with correct color from extremely noisy, low-light inputs. In the second stage, we introduce an edge enhancement module that refines the initial images with the help of edge information, so our method can reconstruct high-quality images with sharp edges while minimizing the pixel-wise loss. Experiments on the See-in-the-Dark dataset indicate that our EEMEFN approach achieves state-of-the-art performance.
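The two-stage pipeline in the abstract can be illustrated compactly. The sketch below is a minimal NumPy approximation of the two ideas, not the authors' trained network: stage one is mimicked by synthesizing pseudo-exposures from a single dark image and fusing their well-exposed regions, and stage two by a Sobel edge map standing in for the paper's learned edge-detection branch. The exposure ratios, the Gaussian well-exposedness weight, and all function names here are illustrative assumptions.

    import numpy as np

    def synthesize_exposures(img, ratios=(1.0, 4.0, 16.0)):
        """Simulate different exposure times by linearly scaling a single
        low-light image (values in [0, 1]) and clipping to the valid range."""
        return [np.clip(img * r, 0.0, 1.0) for r in ratios]

    def well_exposedness(img, sigma=0.2):
        """Per-pixel weight: highest where intensity is near mid-gray (0.5),
        so over- and under-exposed regions contribute less to the fusion."""
        return np.exp(-((img - 0.5) ** 2) / (2 * sigma ** 2))

    def fuse_exposures(exposures, eps=1e-6):
        """Weighted per-pixel average of the synthetic exposures."""
        weights = [well_exposedness(e) for e in exposures]
        total = sum(weights) + eps
        return sum(w * e for w, e in zip(weights, exposures)) / total

    def sobel_edges(img):
        """Crude edge map via Sobel filtering; a stand-in for the learned
        edge-detection module that guides the refinement stage."""
        kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
        ky = kx.T
        pad = np.pad(img, 1, mode="edge")
        gx = np.zeros_like(img)
        gy = np.zeros_like(img)
        for i in range(img.shape[0]):
            for j in range(img.shape[1]):
                patch = pad[i:i + 3, j:j + 3]
                gx[i, j] = (patch * kx).sum()
                gy[i, j] = (patch * ky).sum()
        return np.hypot(gx, gy)

    # Toy usage on a synthetic, severely underexposed grayscale image.
    rng = np.random.default_rng(0)
    dark = rng.random((64, 64)) * 0.05
    initial = fuse_exposures(synthesize_exposures(dark))  # stage-one output
    edges = sobel_edges(initial)   # edge guidance for stage-two refinement
    print(initial.mean(), edges.max())

In the actual EEMEFN, both the fusion and the edge branch are learned networks trained end to end; this sketch only shows why fusing differently exposed copies of one image recovers detail in both dark and bright regions.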
Pages: 13106-13113
Number of pages: 8
Related Papers (50 in total; entries [41]-[50] shown)
  • [41] Low-light image enhancement for infrared and visible image fusion
    Zhou, Yiqiao
    Xie, Lisiqi
    He, Kangjian
    Xu, Dan
    Tao, Dapeng
    Lin, Xu
    IET IMAGE PROCESSING, 2023, 17 (11) : 3216 - 3234
  • [42] EDMFEN: Edge detection-based multi-scale feature enhancement Network for low-light image enhancement
    Li, Canlin
    Song, Shun
    Gao, Pengcheng
    Huang, Wei
    Bi, Lihua
    KSII TRANSACTIONS ON INTERNET AND INFORMATION SYSTEMS, 2024, 18 (04) : 980 - 997
  • [43] Multi-Modular Network-Based Retinex Fusion Approach for Low-Light Image Enhancement
    Wang, Jiarui
    Sun, Yu
    Yang, Jie
    ELECTRONICS, 2024, 13 (11)
  • [44] Attention-Guided Multi-Scale Feature Fusion Network for Low-Light Image Enhancement
    Cui, HengShuai
    Li, Jinjiang
    Hua, Zhen
    Fan, Linwei
    FRONTIERS IN NEUROROBOTICS, 2022, 16
  • [45] RCFNC: a resolution and contrast fusion network with ConvLSTM for low-light image enhancement
    Li, Canlin
    Song, Shun
    Wang, Xinyue
    Liu, Yan
    Bi, Lihua
    VISUAL COMPUTER, 2024, 40 (04) : 2793 - 2806
  • [47] LOW-LIGHT IMAGE ENHANCEMENT WITH ATTENTION AND MULTI-LEVEL FEATURE FUSION
    Wang, Lei
    Fu, Guangtao
    Jiang, Zhuqing
    Ju, Guodong
    Men, Aidong
    2019 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA & EXPO WORKSHOPS (ICMEW), 2019 : 276 - 281
  • [48] Low-Light Image Enhancement via Pair of Complementary Gamma Functions by Fusion
    Li, Changli
    Tang, Shiqiang
    Yan, Jingwen
    Zhou, Teng
    IEEE ACCESS, 2020, 8 (08) : 169887 - 169896
  • [49] Low-Light Image Enhancement via Cross-Domain Feature Fusion
    Chen, Bin
    Chen, Keyuan
    Wu, Shiqian
    LASER & OPTOELECTRONICS PROGRESS, 2024, 61 (24)
  • [50] Fusion-Based Low-Light Image Enhancement
    Wang, Haodian
    Wang, Yang
    Cao, Yang
    Zha, Zheng-Jun
    MULTIMEDIA MODELING, MMM 2023, PT I, 2023, 13833 : 121 - 133