CRetinex: A Progressive Color-Shift Aware Retinex Model for Low-Light Image Enhancement

Cited by: 5
Authors
Xu, Han [1]
Zhang, Hao [2]
Yi, Xunpeng [2]
Ma, Jiayi [2]
Affiliations
[1] Southeast Univ, Sch Automat, Nanjing 210096, Peoples R China
[2] Wuhan Univ, Elect Informat Sch, Wuhan 430072, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Low-light image enhancement; Retinex model; Color shift; Image decomposition; NETWORK; PERFORMANCE;
DOI
10.1007/s11263-024-02065-z
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
Low-light environments introduce various complex degradations into captured images. Retinex-based methods have demonstrated effective enhancement performance by decomposing an image into illumination and reflectance, allowing for selective adjustment and removal of degradations. However, different types of pollution in the reflectance are often treated together. Without an explicit distinction and definition of the various pollution types, residual pollution remains in the results. In particular, color shift is generally spatially invariant, which distinguishes it from other, spatially variant pollution and makes it difficult to eliminate with denoising methods. The remaining color shift compromises color constancy both in theory and in practice. In this paper, we consider different manifestations of degradations and further decompose them. We propose a color-shift aware Retinex model, termed CRetinex, which decomposes an image into reflectance, color shift, and illumination. Specific networks are designed to remove spatially variant pollution, correct the color shift, and adjust illumination separately. Comparative experiments with the state-of-the-art demonstrate the qualitative and quantitative superiority of our approach. Furthermore, extensive experiments on multiple datasets, including real and synthetic images, along with extended validation, confirm the effectiveness of color-shift aware decomposition and the generalization of CRetinex over a wide range of low-light levels.
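As a rough illustration of the decomposition described in the abstract, the sketch below composes a low-light observation from reflectance, a spatially invariant per-channel color-shift factor, and an illumination map, and then inverts that composition. The symbol names (R, S, L), the element-wise product form, and the helper functions are assumptions made for illustration only; they do not reproduce the paper's networks or its exact formulation.

import numpy as np

def compose_observation(reflectance, color_shift, illumination):
    # reflectance: HxWx3 in [0, 1]; color_shift: length-3 per-channel factor
    # (spatially invariant); illumination: HxW single-channel map in (0, 1].
    return reflectance * color_shift.reshape(1, 1, 3) * illumination[..., None]

def enhance(observation, color_shift, illumination, eps=1e-6):
    # Inverts the composition: removes the global color shift and re-lights
    # the scene, recovering an estimate of the clean reflectance.
    return observation / (color_shift.reshape(1, 1, 3) + eps) / (illumination[..., None] + eps)

# Toy usage with random tensors standing in for network predictions.
rng = np.random.default_rng(0)
R = rng.uniform(0.2, 1.0, size=(4, 4, 3))   # clean reflectance
S = np.array([0.9, 1.0, 1.2])               # global (spatially invariant) color shift
L = rng.uniform(0.05, 0.3, size=(4, 4))     # dim illumination map
I_low = compose_observation(R, S, L)        # simulated low-light observation
R_hat = enhance(I_low, S, L)                # recovered reflectance estimate
assert np.allclose(R_hat, R, atol=1e-3)

In the actual method, S and L would be predicted by the dedicated networks (and spatially variant pollution removed separately); the point of the sketch is only that a spatially invariant color shift can be factored out independently of illumination.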
Pages: 3610-3632
Page count: 23
Related Papers
50 records in total
  • [31] Learning shrinkage fields for low-light image enhancement via Retinex
    Wu Q.
    Wang R.
    Ren W.
Beijing Hangkong Hangtian Daxue Xuebao/Journal of Beijing University of Aeronautics and Astronautics, 2020, 46 (09) : 1711 - 1720
  • [32] Optimization algorithm for low-light image enhancement based on Retinex theory
    Yang, Jie
    Wang, Jun
    Dong, LinLu
    Chen, ShuYuan
    Wu, Hao
    Zhong, YaWen
    IET IMAGE PROCESSING, 2023, 17 (02) : 505 - 517
  • [33] Retinex-Based Multiphase Algorithm for Low-Light Image Enhancement
    Al-Hashim, Mohammad Abid
    Al-Ameen, Zohair
    TRAITEMENT DU SIGNAL, 2020, 37 (05) : 733 - 743
  • [34] Retinex-Based Fast Algorithm for Low-Light Image Enhancement
    Liu, Shouxin
    Long, Wei
    He, Lei
    Li, Yanyan
    Ding, Wei
    ENTROPY, 2021, 23 (06)
  • [35] Retinex low-light image enhancement network based on attention mechanism
    Chen, Xinyu
    Li, Jinjiang
    Hua, Zhen
    MULTIMEDIA TOOLS AND APPLICATIONS, 2023, 82 (03) : 4235 - 4255
  • [37] A Retinex-based network for image enhancement in low-light environments
    Wu, Ji
    Ding, Bing
    Zhang, Beining
    Ding, Jie
PLOS ONE, 2024, 19 (05)
  • [38] Polarization-Aware Low-Light Image Enhancement
    Zhou, Chu
    Teng, Minggui
    Lyu, Youwei
    Li, Si
    Xu, Chao
    Shi, Boxin
THIRTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 37 NO 3, 2023 : 3742 - 3750
  • [39] SNR-Aware Low-light Image Enhancement
    Xu, Xiaogang
    Wang, Ruixing
    Fu, Chi-Wing
    Jia, Jiaya
2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022), 2022 : 17693 - 17703
  • [40] Low-Light Image Enhancement via Weighted Low-Rank Tensor Regularized Retinex Model
    Yang, Weipeng
    Gao, Hongxia
    Zou, Wenbin
    Liu, Tongtong
    Huang, Shasha
    Ma, Jianliang
PROCEEDINGS OF THE 4TH ANNUAL ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA RETRIEVAL, ICMR 2024, 2024 : 767 - 775