Unrolled Decomposed Unpaired Learning for Controllable Low-Light Video Enhancement

Cited: 0
Authors
Zhu, Lingyu [1 ]
Yang, Wenhan [2 ]
Chen, Baoliang [1 ]
Zhu, Hanwei [1 ]
Ni, Zhangkai [3 ]
Mao, Qi [4 ]
Wang, Shiqi [1 ]
Affiliations
[1] City Univ Hong Kong, Hong Kong, Peoples R China
[2] PengCheng Lab, Shenzhen, Peoples R China
[3] Tongji Univ, Shanghai, Peoples R China
[4] Commun Univ China, Beijing, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Low-light Video Enhancement; Unpaired Dataset Training; Optimization Learning; IMAGE QUALITY ASSESSMENT; ALGORITHM; RETINEX;
DOI
10.1007/978-3-031-73337-6_19
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Obtaining pairs of low/normal-light videos with motion is far more challenging than obtaining still-image pairs, which makes the technical route of unpaired learning critical. This paper pursues learning-based low-light video enhancement without paired ground truth. Compared with low-light image enhancement, enhancing low-light videos is more difficult due to the intertwined effects of noise, exposure, and contrast in the spatial domain, together with the need for temporal coherence. To address this challenge, we propose the Unrolled Decomposed Unpaired Network (UDU-Net), which enhances low-light videos by unrolling the optimization functions into a deep network that decomposes the signal into spatial- and temporal-related factors, updated iteratively. Firstly, we formulate low-light video enhancement as a Maximum A Posteriori (MAP) estimation problem with carefully designed spatial and temporal visual regularization. Then, by unrolling the problem, the optimization of the spatial and temporal constraints can be decomposed into different steps and updated in a stage-wise manner. From the spatial perspective, the designed Intra subnet leverages unpaired prior information from expert photographic retouching to adjust the statistical distribution. Additionally, we introduce a novel mechanism that integrates human perception feedback to guide network optimization, suppressing over- and under-exposure. Meanwhile, from the temporal perspective, the designed Inter subnet fully exploits temporal cues during the progressive optimization, which helps achieve improved temporal consistency in the enhancement results. Consequently, the proposed method achieves performance superior to that of state-of-the-art methods in video illumination, noise suppression, and temporal consistency across outdoor and indoor scenes. Our code is available at https://github.com/lingyzhu0101/UDU.git
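The abstract describes a MAP formulation with spatial and temporal regularization that is unrolled into alternating stage-wise updates. As a sketch only, a generic objective of this form (the data term, priors, and weights below are assumptions, not the paper's exact formulation) can be written as

    \hat{Y} = \arg\min_{Y} \; \mathcal{D}(Y, X) + \lambda_{s}\,\Phi_{s}(Y) + \lambda_{t}\,\Phi_{t}(Y),

where X is the low-light input video, Y the enhanced output, \mathcal{D} a data-fidelity term, \Phi_{s} a spatial prior, and \Phi_{t} a temporal-coherence prior. A minimal PyTorch-style sketch of the stage-wise alternation between the Intra (spatial) and Inter (temporal) subnets follows; the class name, subnet interfaces, tensor layout, and stage count are hypothetical and do not reproduce the released UDU-Net code.

    # Minimal sketch, assuming PyTorch and 5-D video tensors; not the authors' implementation.
    import torch
    import torch.nn as nn

    class UnrolledEnhancer(nn.Module):
        """Alternates Intra (spatial) and Inter (temporal) updates over unrolled stages."""

        def __init__(self, intra_subnets, inter_subnets):
            super().__init__()
            assert len(intra_subnets) == len(inter_subnets)
            self.intra_subnets = nn.ModuleList(intra_subnets)  # spatial step: per-frame exposure/contrast/noise
            self.inter_subnets = nn.ModuleList(inter_subnets)  # temporal step: cross-frame coherence

        def forward(self, frames):
            # frames: (batch, time, channels, height, width) low-light input video
            x = frames
            for intra, inter in zip(self.intra_subnets, self.inter_subnets):
                x = intra(x)  # stage k: adjust the spatial statistical distribution
                x = inter(x)  # stage k: exploit temporal cues for consistency
            return x

    # Usage with placeholder subnets (real subnets would be learned modules):
    stages = 3
    model = UnrolledEnhancer([nn.Identity() for _ in range(stages)],
                             [nn.Identity() for _ in range(stages)])
    enhanced = model(torch.rand(1, 8, 3, 64, 64))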
Pages: 329-347
Page count: 19