Toward Fast, Flexible, and Robust Low-Light Image Enhancement

Cited by: 424
Authors
Ma, Long [1 ,3 ]
Ma, Tengyu [1 ]
Liu, Risheng [2 ]
Fan, Xin [2 ]
Luo, Zhongxuan [1 ]
Affiliations
[1] Dalian Univ Technol, Sch Software Technol, Dalian, Peoples R China
[2] Dalian Univ Technol, DUT RU Int Sch Informat Sci & Engn, Dalian, Peoples R China
[3] Peng Cheng Lab, Shenzhen, Peoples R China
Funding
National Key R&D Program of China; National Natural Science Foundation of China;
Keywords
ILLUMINATION;
DOI
10.1109/CVPR52688.2022.00555
Chinese Library Classification
TP18 [Theory of Artificial Intelligence];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Existing low-light image enhancement techniques mostly struggle to balance visual quality with computational efficiency, and they commonly fail in unknown complex scenarios. In this paper, we develop a new Self-Calibrated Illumination (SCI) learning framework for fast, flexible, and robust brightening of images in real-world low-light scenarios. Specifically, we establish a cascaded illumination learning process with weight sharing to handle this task. To offset the computational burden of the cascaded pattern, we construct a self-calibrated module that drives the results of each stage to converge, so that inference requires only the single basic block (a gain not exploited in previous works), drastically reducing computation cost. We then define an unsupervised training loss to improve the model's ability to adapt to general scenes. Further, we comprehensively explore SCI's inherent properties (lacking in existing works), including operation-insensitive adaptability (stable performance under the settings of different simple operations) and model-irrelevant generality (it can be applied to existing illumination-based works to improve their performance). Finally, extensive experiments and ablation studies demonstrate our superiority in both quality and efficiency. Applications to low-light face detection and nighttime semantic segmentation reveal SCI's latent practical value. The source code is available at https://github.com/vis-opt-group/SCI.
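To make the cascade-with-convergence idea in the abstract concrete, here is a minimal NumPy sketch. It is an illustration, not the paper's implementation: `basic_block` is a hypothetical stand-in for the learned weight-shared block, and `self_calibrate` is an assumed simple form of the calibration map. The sketch only shows the structural point: every training stage reuses the same block on a calibrated input, and because stage results converge, inference can run the single block once in Retinex style (output = input / illumination).

```python
import numpy as np

def basic_block(v, w=0.6):
    """Hypothetical stand-in for the learned basic block: estimates an
    illumination map u >= v by pushing pixels toward the frame maximum."""
    return np.clip(v + w * (v.max() - v), 1e-3, 1.0)

def self_calibrate(y, x):
    """Assumed calibration map: pulls the stage output back toward the
    original low-light input, so successive stage inputs converge."""
    return np.clip(x + 0.5 * (y - x), 0.0, 1.0)

def sci_cascade(x, stages=3, w=0.6):
    """Training-time cascade: every stage reuses the SAME block weights."""
    v = x
    for _ in range(stages):
        u = basic_block(v, w)          # illumination estimate for this stage
        y = np.clip(x / u, 0.0, 1.0)   # Retinex-style brightening: y = x / u
        v = self_calibrate(y, x)       # calibrated input for the next stage
    return y

def sci_infer(x, w=0.6):
    """Inference: since stage results converge, one basic block suffices."""
    u = basic_block(x, w)
    return np.clip(x / u, 0.0, 1.0)
```

The computational gain claimed in the abstract corresponds to `sci_infer`: the cascade exists only during training, while deployment runs a single block pass.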
Pages: 5627-5636 (10 pages)
Related Papers (50 records)
  • [1] Fast and robust low-light image enhancement based on iterative propagation network
    Xiao, Zhibo
    Jiang, Zhilong
    Kong, Yan
    CHINESE JOURNAL OF LIQUID CRYSTALS AND DISPLAYS, 2024, 39 (07) : 971 - 979
  • [2] Exploring Fast and Flexible Zero-Shot Low-Light Image/Video Enhancement
    Han, Xianjun
    Bao, Taoli
    Yang, Hongyu
    COMPUTER GRAPHICS FORUM, 2024, 43 (07)
  • [3] Toward Robust and Efficient Low-Light Image Enhancement: Progressive Attentive Retinex Architecture Search
    Shang, Xiaoke
    An, Nan
    Zhang, Shaomin
    Ding, Nai
    TSINGHUA SCIENCE AND TECHNOLOGY, 2023, 28 (03): : 580 - 594
  • [4] Bilevel Fast Scene Adaptation for Low-Light Image Enhancement
    Ma, Long
    Jin, Dian
    An, Nan
    Liu, Jinyuan
    Fan, Xin
    Luo, Zhongxuan
    Liu, Risheng
    INTERNATIONAL JOURNAL OF COMPUTER VISION, 2023
  • [5] DRLIE: Flexible Low-Light Image Enhancement via Disentangled Representations
    Tang, Linfeng
    Ma, Jiayi
    Zhang, Hao
    Guo, Xiaojie
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024, 35 (02) : 2694 - 2707
  • [6] Retinex-Based Fast Algorithm for Low-Light Image Enhancement
    Liu, Shouxin
    Long, Wei
    He, Lei
    Li, Yanyan
    Ding, Wei
    ENTROPY, 2021, 23 (06)
  • [7] Lightweight and Fast Low-Light Image Enhancement Method Based on PoolFormer
    Hu, Xin
    Wang, Jinhua
    Xu, Sunhan
    IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS, 2024, E107D (01) : 157 - 160
  • [8] Low-Light Stereo Image Enhancement
    Huang, Jie
    Fu, Xueyang
    Xiao, Zeyu
    Zhao, Feng
    Xiong, Zhiwei
    IEEE TRANSACTIONS ON MULTIMEDIA, 2023, 25 : 2978 - 2992
  • [9] Low-Light Hyperspectral Image Enhancement
    Li, Xuelong
    Li, Guanlin
    Zhao, Bin
    IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2022, 60
  • [10] Decoupled Low-Light Image Enhancement
    Hao, Shijie
    Han, Xu
    Guo, Yanrong
    Wang, Meng
    ACM TRANSACTIONS ON MULTIMEDIA COMPUTING COMMUNICATIONS AND APPLICATIONS, 2022, 18 (04)