Low-light image enhancement network based on central difference convolution

Cited by: 0
Authors
Chen, Yong [1,2]
Chen, Shangming [1]
Liu, Huanlin [3]
Xiong, Hangying [3]
Zhang, Yourui [1]
Affiliations
[1] Chongqing Univ Posts & Telecommun, Sch Automat, Chongqing 400065, Peoples R China
[2] Chongqing Inst Engn, Chongqing 400056, Peoples R China
[3] Chongqing Univ Posts & Telecommun, Sch Commun & Informat Engn, Chongqing 400065, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Image processing; Low-light image enhancement; Central difference convolution; Feature attention; Gamma correction; Contrast;
DOI
10.1016/j.engappai.2025.111492
Chinese Library Classification
TP [Automation and computer technology];
Discipline code
0812;
Abstract
The convolutional neural networks and transformers used in existing low-light image enhancement methods tend to ignore high-frequency information, which blurs the details of the enhanced image and degrades the performance of computer vision tasks at night. We therefore propose a novel low-light image enhancement network based on central difference convolution (CDCLNet). The method uses traditional image processing techniques to help the network extract high-frequency information. Specifically, to fully expose hidden high-frequency details, a multi-exposure strategy based on bright and dark masks first exposes the image to different levels. The complementary information among the multi-exposure images is then fused by the first-stage network. Finally, the second-stage network suppresses the amplified noise and enhances the details. In addition, we design a central difference convolution module (CDCM) with channel attention to adaptively extract gradient-level detail features according to the needs of the two-stage network. To make the network aware of non-uniform illumination, we propose a multi-scale feature attention module (MFAM), which extracts multi-scale features in each channel and generates channel-specific attention maps. Experiments on four public datasets show that the proposed method enhances details more effectively than mainstream methods and achieves the highest structural similarity index on two paired datasets, with an average value of 0.899.
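The record does not include an implementation, so the PyTorch layer below is only a rough, non-authoritative sketch of the central difference convolution that the CDCM builds on. It follows the commonly used formulation of Yu et al. (CVPR 2020), where the output is a blend y = θ·CDC + (1−θ)·vanilla convolution, which simplifies to the vanilla convolution minus θ times a 1×1 convolution with the spatially summed kernel. The class name CentralDifferenceConv and the default θ = 0.7 are illustrative assumptions, not details taken from the paper.

# Sketch of a central difference convolution (CDC) layer in PyTorch.
# Assumption: standard CDC formulation y = conv(x) - theta * (sum of kernel weights applied as 1x1 conv),
# which blends a vanilla convolution with a gradient-level (central difference) term.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CentralDifferenceConv(nn.Module):
    def __init__(self, in_channels, out_channels, kernel_size=3, stride=1,
                 padding=1, theta=0.7):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size,
                              stride=stride, padding=padding, bias=False)
        self.theta = theta  # weight of the central-difference term

    def forward(self, x):
        out = self.conv(x)  # vanilla convolution
        if self.theta == 0:
            return out
        # The central-difference term reduces to a 1x1 convolution whose
        # weights are the spatial sums of the original kernel.
        kernel_sum = self.conv.weight.sum(dim=(2, 3), keepdim=True)
        out_diff = F.conv2d(x, kernel_sum, stride=self.conv.stride)
        return out - self.theta * out_diff

if __name__ == "__main__":
    layer = CentralDifferenceConv(3, 16)
    y = layer(torch.randn(1, 3, 64, 64))
    print(y.shape)  # torch.Size([1, 16, 64, 64])

Under this formulation, setting theta to 0 recovers a plain convolution, while larger values emphasize gradient-level (high-frequency) information, which is consistent with the detail-preserving motivation stated in the abstract.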
Pages: 13