Temporally Consistent Enhancement of Low-Light Videos via Spatial-Temporal Compatible Learning

Cited by: 1
Authors
Zhu, Lingyu [1 ]
Yang, Wenhan [2 ]
Chen, Baoliang [1 ]
Zhu, Hanwei [1 ]
Meng, Xiandong [2 ]
Wang, Shiqi [1 ,3 ]
Affiliations
[1] City Univ Hong Kong, Dept Comp Sci, Kowloon, Hong Kong, Peoples R China
[2] Peng Cheng Lab, Shenzhen, Peoples R China
[3] City Univ Hong Kong, Shenzhen Res Inst, Shenzhen, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Low-light video enhancement; Temporal consistency; Spatial-temporal compatible learning; Quality assessment; Image; Framework; Retinex
DOI
10.1007/s11263-024-02084-w
CLC classification number
TP18 [Theory of Artificial Intelligence]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
Temporal inconsistency is an annoying artifact commonly introduced by low-light video enhancement, yet current methods tend to overlook the value of exploiting both data-centric clues and model-centric design to tackle this problem. In this context, our work makes a comprehensive exploration from three aspects. First, to enrich scene diversity and motion flexibility, we construct a diverse synthetic low/normal-light paired video dataset with a carefully designed low-light simulation strategy, which effectively complements existing real captured datasets. Second, to better exploit temporal dependencies, we develop a Temporally Consistent Enhancer Network (TCE-Net) that stacks 3D and 2D convolutions to exploit spatial-temporal clues in videos. Last, temporal dynamic feature dependencies are exploited to obtain consistency constraints across different frame indexes. All these efforts are powered by a Spatial-Temporal Compatible Learning (STCL) optimization technique, which adaptively constructs dataset-specific training loss functions. As such, multi-frame information can be effectively utilized and different levels of information from the network can be flexibly integrated, expanding the synergies across different kinds of data and offering visually better results in terms of illumination distribution, color consistency, texture details, and temporal coherence. Extensive experimental results on various real-world low-light video datasets clearly demonstrate that the proposed method achieves superior performance to state-of-the-art methods. Our code and synthesized low-light video database will be publicly available at https://github.com/lingyzhu0101/low-light-video-enhancement.git.
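The abstract outlines TCE-Net as stacked 3D and 2D convolutions that exploit spatial-temporal clues, trained with consistency constraints across frame indexes. The following is a minimal, hypothetical PyTorch sketch of that general idea; the module TinyTCENet, its layer counts and channel widths, and the temporal_consistency_loss helper are illustrative assumptions and not the authors' released STCL implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F


class TinyTCENet(nn.Module):
    # Stacked 3D convolutions aggregate temporal context; 2D convolutions refine each frame.
    def __init__(self, channels: int = 32):
        super().__init__()
        self.temporal = nn.Sequential(
            nn.Conv3d(3, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        self.spatial = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, 3, kernel_size=3, padding=1),
        )

    def forward(self, clip):
        # clip: (B, 3, T, H, W) low-light frames; returns enhanced frames of the same shape.
        feat = self.temporal(clip)
        b, c, t, h, w = feat.shape
        frames = feat.permute(0, 2, 1, 3, 4).reshape(b * t, c, h, w)
        out = self.spatial(frames).reshape(b, t, 3, h, w).permute(0, 2, 1, 3, 4)
        return torch.sigmoid(out + clip)  # residual-style enhancement


def temporal_consistency_loss(pred, target):
    # Penalize frame-to-frame changes in the prediction that the reference does not exhibit.
    pred_diff = pred[:, :, 1:] - pred[:, :, :-1]
    target_diff = target[:, :, 1:] - target[:, :, :-1]
    return F.l1_loss(pred_diff, target_diff)


if __name__ == "__main__":
    model = TinyTCENet()
    low = torch.rand(1, 3, 5, 64, 64)   # one clip of 5 frames
    gt = torch.rand(1, 3, 5, 64, 64)
    enhanced = model(low)
    loss = F.l1_loss(enhanced, gt) + 0.5 * temporal_consistency_loss(enhanced, gt)
    loss.backward()
    print(enhanced.shape, float(loss))

A hypothetical training loop would sample short clips from paired low/normal-light videos (real or synthesized) and weight the consistency term per dataset, loosely mirroring the dataset-adaptive loss construction described in the abstract.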
Pages: 4703-4723
Number of pages: 21