Joint semantic-aware and noise suppression for low-light image enhancement without reference

Cited by: 2
Authors
Zhang, Meng [1 ]
Liu, Lidong [1 ]
Jiang, Donghua [2 ]
Affiliations
[1] Changan Univ, Sch Informat Engn, Xian 710018, Peoples R China
[2] Sun Yat Sen Univ, Sch Comp Sci & Engn, Guangzhou 510006, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Low-light enhancement; Image denoising; Semantic feature; Deep learning; QUALITY ASSESSMENT; ILLUMINATION; COLOR;
DOI
10.1007/s11760-023-02613-z
Chinese Library Classification
TM [Electrical Technology]; TN [Electronic Technology, Communication Technology];
Discipline codes
0808; 0809;
Abstract
Digital images captured in the real world are inevitably degraded by poor lighting and noise. Moreover, downstream high-level vision tasks, such as object detection and semantic segmentation, benefit from improved visibility of dark scenes. Although deep-learning-based approaches have achieved great success in low-light enhancement, the significant influence of semantic features and noise is often overlooked. Therefore, this paper proposes a new unsupervised low-light enhancement model based on semantic perception and noise suppression. First, an enhancement-factor map is estimated from the low-light image features, and progressive curve enhancement is applied to adjust pixel intensities; unlike fully supervised methods, the network is trained with unpaired images. Second, guided by a semantic feature embedding module, the enhancement preserves rich semantic information. Additionally, a self-supervised noise-removal module effectively suppresses noise interference and improves image quality. Experimental results and analysis show that the proposed scheme not only produces visually pleasing, artifact-free enhanced images but also benefits multiple downstream vision tasks.
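The "progressive curve enhancement" described above can be illustrated with the iterative quadratic light-enhancement curve commonly used in unsupervised curve-based methods (e.g. Zero-DCE); whether this paper uses exactly this formulation is an assumption. In this sketch, `alpha` stands in for the per-pixel enhancement-factor map that the abstract says the network extracts; here it is supplied directly for illustration.

```python
import numpy as np

def curve_enhance(img, alpha, n_iter=8):
    """Progressively brighten `img` (values in [0, 1]) with the quadratic
    curve LE(x) = x + alpha * x * (1 - x), applied `n_iter` times.

    `alpha` is the enhancement-factor map (scalar or per-pixel array in
    [-1, 1]); in the paper it would be predicted by the network, which is
    an assumption of this sketch, not the authors' exact formulation.
    """
    out = img.astype(np.float64)
    for _ in range(n_iter):
        # Each pass pushes dark pixels up while leaving 0 and 1 fixed,
        # so repeated application yields a progressively stronger curve.
        out = out + alpha * out * (1.0 - out)
    return np.clip(out, 0.0, 1.0)
```

For a uniformly dark image (e.g. intensity 0.2) with a positive `alpha`, each iteration monotonically raises the intensity toward 1 while keeping the output in the valid range, which is the behavior the progressive-curve adjustment relies on.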
Pages: 3847-3855
Number of pages: 9
Cited References
39 references total
[1]   Semantic Segmentation Guided Real-World Super-Resolution [J].
Aakerberg, Andreas ;
Johansen, Anders S. ;
Nasrollahi, Kamal ;
Moeslund, Thomas B. .
2022 IEEE/CVF WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION WORKSHOPS (WACVW 2022), 2022, :449-458
[2]   The 2018 PIRM Challenge on Perceptual Image Super-Resolution [J].
Blau, Yochai ;
Mechrez, Roey ;
Timofte, Radu ;
Michaeli, Tomer ;
Zelnik-Manor, Lihi .
COMPUTER VISION - ECCV 2018 WORKSHOPS, PT V, 2019, 11133 :334-355
[3]   RNON: image inpainting via repair network and optimization network [J].
Chen, Yuantao ;
Xia, Runlong ;
Zou, Ke ;
Yang, Kai .
INTERNATIONAL JOURNAL OF MACHINE LEARNING AND CYBERNETICS, 2023, 14 (09) :2945-2961
[4]   FFTI: Image inpainting algorithm via features fusion and two-steps inpainting [J].
Chen, Yuantao ;
Xia, Runlong ;
Zou, Ke ;
Yang, Kai .
JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION, 2023, 91
[5]   MFFN: image super-resolution via multi-level features fusion network [J].
Chen, Yuantao ;
Xia, Runlong ;
Yang, Kai ;
Zou, Ke .
VISUAL COMPUTER, 2024, 40 (02) :489-504
[6]   CERL: A Unified Optimization Framework for Light Enhancement With Realistic Noise [J].
Chen, Zeyuan ;
Jiang, Yifan ;
Liu, Dong ;
Wang, Zhangyang .
IEEE TRANSACTIONS ON IMAGE PROCESSING, 2022, 31 :4162-4172
[7]   A simple and effective histogram equalization approach to image enhancement [J].
Cheng, HD ;
Shi, XJ .
DIGITAL SIGNAL PROCESSING, 2004, 14 (02) :158-170
[8]   Xception: Deep Learning with Depthwise Separable Convolutions [J].
Chollet, Francois .
30TH IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2017), 2017, :1800-1807
[9]   Object Detection Method for High Resolution Remote Sensing Imagery Based on Convolutional Neural Networks with Optimal Object Anchor Scales [J].
Dong, Zhipeng ;
Liu, Yanxiong ;
Feng, Yikai ;
Wang, Yanli ;
Xu, Wenxue ;
Chen, Yilan ;
Tang, Qiuhua .
INTERNATIONAL JOURNAL OF REMOTE SENSING, 2022, 43 (07) :2698-2719
[10]   Dual Attention Network for Scene Segmentation [J].
Fu, Jun ;
Liu, Jing ;
Tian, Haijie ;
Li, Yong ;
Bao, Yongjun ;
Fang, Zhiwei ;
Lu, Hanqing .
2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2019), 2019, :3141-3149