Low-light image enhancement with knowledge distillation

Times Cited: 31
Authors
Li, Ziwen [1 ]
Wang, Yuehuan [1 ]
Zhang, Jinpu [1 ]
Affiliations
[1] Huazhong Univ Sci & Technol, Sch Artificial Intelligence & Automat, Natl Key Lab Sci & Technol Multispectral Informat, Wuhan 430074, Peoples R China
Keywords
Low-light image enhancement; Knowledge distillation; Deep learning; Retinex
DOI
10.1016/j.neucom.2022.10.083
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Low-light image enhancement studies how to improve the quality of images captured under poor lighting conditions, a problem of real-world importance. Convolutional neural network (CNN)-based methods currently achieve state-of-the-art performance and have become the mainstream of research. However, most CNN-based methods improve performance by increasing the width and depth of the network, which demands substantial computational resources. In this paper, we propose a knowledge distillation method for low-light image enhancement. The proposed method uses a teacher-student framework in which the teacher network transfers its rich knowledge to the student network. The student network learns image enhancement under the supervision of ground-truth images and, simultaneously, under the guidance of the teacher network. Knowledge transfer between the teacher and student networks is accomplished by a distillation loss based on attention maps. We design a gradient-guided low-light image enhancement network that consists of an enhancement branch and a gradient branch, where the enhancement branch is learned under the guidance of the gradient branch to better preserve structural information. The teacher and student networks share a similar structure but differ in model size: the teacher network has more parameters and stronger learning capability than the student network. With the help of knowledge distillation, our approach improves the performance of the student network without increasing the computational burden during the testing phase. Qualitative and quantitative experimental results demonstrate the superiority of our method over state-of-the-art methods. (c) 2022 Elsevier B.V. All rights reserved.
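The abstract describes attention-map-based distillation between a larger teacher network and a smaller student network, combined with ground-truth supervision of the student. The PyTorch sketch below is a rough illustration only, not the paper's actual implementation: it follows a common formulation in which spatial attention maps are obtained from intermediate feature maps by averaging squared activations over channels, L2-normalizing them, and matching them between paired teacher/student layers, while the student is also supervised by the ground-truth image. The function names, the layer pairing, and the weighting factor lam are hypothetical.

import torch
import torch.nn.functional as F

def attention_map(feat):
    # feat: (N, C, H, W) feature map.
    # Channel-wise mean of squared activations -> (N, H*W), then L2-normalize.
    att = feat.pow(2).mean(dim=1).flatten(1)
    return F.normalize(att, p=2, dim=1)

def attention_distillation_loss(student_feats, teacher_feats):
    # Lists of feature maps taken from corresponding (hypothetically paired) layers.
    loss = 0.0
    for fs, ft in zip(student_feats, teacher_feats):
        loss = loss + (attention_map(fs) - attention_map(ft.detach())).pow(2).mean()
    return loss

def total_loss(student_out, gt, student_feats, teacher_feats, lam=0.1):
    # Ground-truth supervision of the student plus attention-map guidance from the teacher.
    recon = F.l1_loss(student_out, gt)
    distill = attention_distillation_loss(student_feats, teacher_feats)
    return recon + lam * distill

Because only the loss (not the teacher's forward pass) is needed at inference time, the student runs alone during testing, which is consistent with the abstract's claim that distillation adds no computational burden in the testing phase.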
Pages: 332-343
Number of pages: 12