EEMEFN: Low-Light Image Enhancement via Edge-Enhanced Multi-Exposure Fusion Network

Cited by: 0
Authors
Zhu, Minfeng [1 ,2 ]
Pan, Pingbo [2 ,3 ]
Chen, Wei [1 ]
Yang, Yi [2 ]
Affiliations
[1] Zhejiang Univ, State Key Lab CAD & CG, Hangzhou, Peoples R China
[2] Univ Technol Sydney, ReLER Lab, Ultimo, Australia
[3] Baidu Res, Melbourne, Vic, Australia
Funding
National Natural Science Foundation of China
Keywords
DOI
Not available
CLC Classification Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
This work focuses on extremely low-light image enhancement, which aims to improve image brightness and reveal hidden information in dark areas. Image enhancement approaches have recently made impressive progress, yet existing methods still suffer from three main problems: (1) low-light images are usually high-contrast, so existing methods may fail to recover image details in extremely dark or bright areas; (2) current methods cannot precisely correct the color of low-light images; (3) when object edges are unclear, a pixel-wise loss may treat pixels of different objects equally and produce blurry images. In this paper, we propose a two-stage method called the Edge-Enhanced Multi-Exposure Fusion Network (EEMEFN) to enhance extremely low-light images. In the first stage, we employ a multi-exposure fusion module to address the high-contrast and color-bias issues: we synthesize a set of images with different exposure times from a single image and construct an accurate normal-light image by combining well-exposed areas under different illumination conditions. This stage thus produces realistic initial images with correct color from extremely noisy, low-light inputs. In the second stage, we introduce an edge enhancement module that refines the initial images with the help of edge information, so our method can reconstruct high-quality images with sharp edges while minimizing the pixel-wise loss. Experiments on the See-in-the-Dark dataset show that EEMEFN achieves state-of-the-art performance.
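For illustration only, the first-stage idea sketched in the abstract (generate a multi-exposure stack from one image, then fuse the well-exposed regions) can be approximated in a few lines of NumPy. The exposure ratios and the hand-crafted Gaussian well-exposedness weight below are assumptions; the paper's module learns the fusion with a network, and every function name here is hypothetical.

    import numpy as np

    def synthesize_exposures(img, ratios=(1.0, 3.0, 5.0)):
        # Mimic longer exposure times by scaling a linear-intensity
        # image in [0, 1] and clipping the over-exposed pixels.
        return [np.clip(img * r, 0.0, 1.0) for r in ratios]

    def well_exposedness(stack, target=0.5, sigma=0.2):
        # Gaussian weight: pixels near mid-intensity count as well exposed.
        return np.exp(-((stack - target) ** 2) / (2.0 * sigma ** 2))

    def fuse(exposures, eps=1e-8):
        # Per-pixel weighted average of the exposure stack; a hand-crafted
        # stand-in for the paper's learned fusion blocks.
        stack = np.stack(exposures)                    # (N, H, W, C)
        w = well_exposedness(stack)
        w = w / (w.sum(axis=0, keepdims=True) + eps)
        return (w * stack).sum(axis=0)                 # (H, W, C)

    dark = np.random.rand(64, 64, 3) * 0.1  # stand-in for a low-light photo
    initial = fuse(synthesize_exposures(dark))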
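Similarly, a minimal sketch of the second-stage idea: derive an edge map from the initial image and use it to up-weight the pixel-wise loss at object boundaries so that edges are not averaged into blur. The gradient-magnitude edge map and the weighting factor alpha are illustrative assumptions standing in for the paper's learned edge detection branch, not its actual formulation.

    import numpy as np

    def edge_map(img):
        # Gradient-magnitude edge map; a simple stand-in for the learned
        # edge detection branch of the second stage.
        gray = img.mean(axis=-1)
        gy, gx = np.gradient(gray)
        mag = np.sqrt(gx ** 2 + gy ** 2)
        return mag / (mag.max() + 1e-8)

    def edge_weighted_l1(pred, target, edges, alpha=4.0):
        # Pixel-wise L1 loss up-weighted on edge pixels; alpha is an
        # illustrative assumption, not a value from the paper.
        w = 1.0 + alpha * edges[..., None]
        return float((w * np.abs(pred - target)).mean())

    edges = edge_map(initial)  # 'initial' comes from the fusion sketch above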
Pages: 13106-13113
Page count: 8
Related Papers
50 records in total (first 10 shown)
  • [1] Learn to enhance the low-light image via a multi-exposure generation and fusion method
    Jin, Haiyan
    Li, Long
    Su, Haonan
    Zhang, Yuanlin
    Xiao, Zhaolin
    Wang, Bin
    JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION, 2024, 100
  • [2] Low-light Image Enhancement Based on Multi-exposure Images Generation
    Guan, Y.
    Chen, X.
    Tian, J.
    Tang, Y.
    JIQIREN/ROBOT, 2023, 45(4): 422-430
  • [3] Low-light image enhancement via multistage feature fusion network
    Tan, Mingming
    Fan, Jiayi
    Fan, Guodong
    Gan, Min
    JOURNAL OF ELECTRONIC IMAGING, 2022, 31(6)
  • [4] Multi-exposure image fusion via deep perceptual enhancement
    Han, Dong
    Li, Liang
    Guo, Xiaojie
    Ma, Jiayi
    INFORMATION FUSION, 2022, 79: 248-262
  • [5] Exposure difference network for low-light image enhancement
    Jiang, Shengqin
    Mei, Yongyue
    Wang, Peng
    Liu, Qingshan
    PATTERN RECOGNITION, 2024, 156
  • [6] Multi-Scale Progressive Fusion Network for Low-Light Image Enhancement
    Zhang, Hongxin
    Ran, Teng
    Xiao, Wendong
    Lv, Kai
    Peng, Song
    Yuan, Liang
    Wang, Jingchuan
    IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, 2025, 74
  • [7] Multi-Exposure Image Fusion Using Edge-Aware Network
    Aslam, Ghazala
    Imran, Muhammad
    Haq, Bushra
    Ullah, Anayat
    Elbasi, Ersin
    2022 17TH INTERNATIONAL CONFERENCE ON EMERGING TECHNOLOGIES (ICET'22), 2022: 59-63
  • [8] Single image defogging via multi-exposure image fusion and detail enhancement
    Mao, Wenjing
    Zheng, Dezhi
    Chen, Minze
    Chen, Juqiang
    JOURNAL OF SAFETY SCIENCE AND RESILIENCE, 2024, 5(1): 37-46
  • [9] EFCANet: Exposure Fusion Cross-Attention Network for Low-Light Image Enhancement
    Yang, Zhe
    Liu, Fangjin
    Li, Jinjiang
    APPLIED SCIENCES-BASEL, 2023, 13(1)
  • [10] FRN: Fusion and recalibration network for low-light image enhancement
    Singh, Kavinder
    Pandey, Ashutosh
    Agarwal, Akshat
    Agarwal, Mohit Kumar
    Shankar, Aditya
    Parihar, Anil Singh
    MULTIMEDIA TOOLS AND APPLICATIONS, 2024, 83: 12235-12252