CRetinex: A Progressive Color-Shift Aware Retinex Model for Low-Light Image Enhancement

Cited by: 5
Authors
Xu, Han [1 ]
Zhang, Hao [2 ]
Yi, Xunpeng [2 ]
Ma, Jiayi [2 ]
Affiliations
[1] Southeast University, School of Automation, Nanjing 210096, People's Republic of China
[2] Wuhan University, Electronic Information School, Wuhan 430072, People's Republic of China
Funding
National Natural Science Foundation of China
Keywords
Low-light image enhancement; Retinex model; Color shift; Image decomposition; Network; Performance
DOI
10.1007/s11263-024-02065-z
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Low-light environments introduce various complex degradations into captured images. Retinex-based methods achieve effective enhancement by decomposing an image into illumination and reflectance, allowing degradations to be adjusted and removed selectively. However, the different types of pollution in the reflectance are usually treated together; without an explicit distinction and definition of each pollution type, residual pollution remains in the results. In particular, color shift, which is generally spatially invariant, differs from other, spatially variant pollution and is difficult to eliminate with denoising methods. The remaining color shift compromises color constancy both in theory and in practice. In this paper, we consider the different manifestations of these degradations and decompose them further. We propose a color-shift aware Retinex model, termed CRetinex, which decomposes an image into reflectance, color shift, and illumination. Dedicated networks are designed to remove spatially variant pollution, correct the color shift, and adjust the illumination separately. Comparative experiments demonstrate the qualitative and quantitative superiority of our approach over the state of the art. Furthermore, extensive experiments on multiple datasets of real and synthetic images, along with extended validation, confirm the effectiveness of color-shift aware decomposition and the generalization of CRetinex across a wide range of low-light levels.
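For orientation, a minimal LaTeX sketch of the decomposition described in the abstract. The classical Retinex product form is standard; the placement of the color-shift term C is an assumption, since the abstract names the three components without giving the exact formulation.

\begin{align*}
  % Classical Retinex: the observed image S is the element-wise
  % (Hadamard) product of reflectance R and illumination L.
  S &= R \circ L \\
  % Hedged CRetinex form (assumption: the abstract names the three
  % components but not their exact composition): a spatially
  % invariant color-shift map C is factored out of the polluted
  % reflectance \tilde{R}.
  S &= \tilde{R} \circ L = (R \circ C) \circ L
\end{align*}

Under this reading, spatially variant pollution is removed in R, the global color cast is corrected via C, and brightness is adjusted through L, matching the three dedicated networks mentioned in the abstract.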
Pages: 3610-3632 (23 pages)