Cartoon-texture guided network for low-light image enhancement

Times Cited: 4
Authors
Shi, Baoshun [1 ,2 ]
Zhu, Chunzi [1 ,2 ]
Li, Lingyan [3 ]
Huang, Huagui [4 ]
Affiliations
[1] Yanshan Univ, Sch Informat Sci & Engn, Qinhuangdao 066004, Hebei, Peoples R China
[2] Hebei Key Lab Informat Transmiss & Signal Proc, Qinhuangdao 066004, Hebei, Peoples R China
[3] Yanshan Univ, Sch Econ & Management, Qinhuangdao 066004, Hebei, Peoples R China
[4] Yanshan Univ, Sch Mech Engn, Qinhuangdao 066004, Hebei, Peoples R China
Funding
National Natural Science Foundation of China; Science Foundation of the Ministry of Education of China;
Keywords
Low-light image enhancement; Cartoon and texture components; Image decomposition; Normalizing flow; Frequency domain network; CONTRAST ENHANCEMENT;
DOI
10.1016/j.dsp.2023.104271
Chinese Library Classification
TM [Electrical Engineering]; TN [Electronic and Communication Technology];
Discipline Codes
0808 ; 0809 ;
Abstract
Recovering normal-exposure images from low-light images is a challenging task. Recent works have proposed numerous deep learning methods to address it. Nevertheless, most of them treat cartoon and texture components in the same way, resulting in a loss of details. A recent effort, the unfolding total variation network (UTVNet), recovers the normal-light image by roughly decomposing it into a noise-free smoothing layer and a detail layer using total variation (TV) regularization, and then processing the two components in different ways. However, its enhanced images exhibit color distortion owing to the limited representation ability of the TV model. To address this limitation, we design a cartoon-texture guided network, named CatNet, for low-light image enhancement. CatNet uses a cartoon-guided normalizing flow to retain cartoon information and an elaborated frequency-domain attention mechanism in a U-Net, denoted FAU-Net, to recover texture information. Concretely, the ground-truth image is decomposed into cartoon and texture components, which guide the training of the corresponding recovery modules. We also design a hybrid loss in the spatial and frequency domains to train CatNet. Compared to state-of-the-art methods, our method achieves better results, with richer colors and more details. The source code and datasets are publicly available at https://github.com/shibaoshun/CatNet.
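The abstract mentions a hybrid loss defined in both the spatial and frequency domains. The paper's exact formulation is not given here; the following is a minimal illustrative sketch in NumPy, assuming L1 terms in each domain compared on FFT magnitude spectra and a weighting factor `alpha` (both are assumptions, not the authors' specification).

```python
import numpy as np

def hybrid_loss(pred, target, alpha=0.5):
    """Illustrative hybrid spatial/frequency loss (a sketch, not the paper's exact loss).

    Combines a pixel-wise L1 term with an L1 term on the 2-D FFT
    magnitude spectra, so reconstruction errors are penalized in
    both domains. `alpha` balances the two terms.
    """
    # Spatial-domain term: mean absolute pixel error.
    spatial = np.mean(np.abs(pred - target))
    # Frequency-domain term: mean absolute difference of FFT magnitudes.
    freq = np.mean(np.abs(np.abs(np.fft.fft2(pred)) - np.abs(np.fft.fft2(target))))
    return alpha * spatial + (1.0 - alpha) * freq
```

For identical images both terms vanish, so the loss is exactly zero; any pixel- or spectrum-level discrepancy yields a positive value.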
Pages: 12