A novel approach to low-light image and video enhancement using adaptive dual super-resolution generative adversarial networks and top-hat filtering

Cited: 0
Authors
Vishalakshi [1 ,2 ]
Rani, Shobha [1 ,2 ]
Hanumantharaju [1 ,2 ]
Affiliations
[1] BMS Inst Technol & Management, Dept Elect & Commun Engn, Res Ctr, Bengaluru, India
[2] Visvesvaraya Technol Univ, Belagavi, Karnataka, India
Keywords
Adaptive fusion; Generative adversarial network; Image and video enhancement; Mobile devices; Super-resolution; Top-hat transform;
DOI
10.1016/j.compeleceng.2024.110052
Chinese Library Classification
TP3 [Computing technology; computer technology]
Discipline code
0812
Abstract
Image and video enhancement under low-light conditions is challenging, as the task involves more than just brightness adjustment. Without addressing issues such as artifacts, distortions, and noise in dark regions, brightness improvement alone can worsen quality. This paper presents a novel approach to low-light image and video enhancement based on the adaptive fusion of Dual Super-Resolution Generative Adversarial Network (DSRGAN) models, followed by Top-Hat Gradient-Domain Filtering (THGDF). A soft-thresholding mechanism is used to integrate the Memory Residual Super-Resolution Generative Adversarial Network (MRSRGAN) and the Weighted Perception Super-Resolution Generative Adversarial Network (WPSRGAN). MRSRGAN enhances fine details, improving objective performance, while WPSRGAN improves overall details, enhancing subjective performance. Top-hat gradient-domain filtering is then applied to remove artifacts, distortions, and noise in both images and videos, yielding outstanding perception scores. The proposed approach is validated using quality assessment metrics such as Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index Measure (SSIM), and Information Fidelity Criterion (IFC). Extensive experiments conducted on publicly available source code and databases demonstrate that the proposed method is more effective than existing state-of-the-art techniques.
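The top-hat stage named in the abstract can be illustrated in isolation. The sketch below is a plain-NumPy white top-hat transform (an image minus its grayscale morphological opening), which isolates small bright structures such as noise specks narrower than the structuring element. The kernel size, flat square structuring element, and edge padding are illustrative assumptions, not the paper's actual THGDF settings.

```python
import numpy as np

def erode(img, k=3):
    """Grayscale erosion with a flat k x k structuring element (edge-padded)."""
    p = k // 2
    padded = np.pad(img, p, mode="edge")
    out = np.empty_like(img)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + k, j:j + k].min()
    return out

def dilate(img, k=3):
    """Grayscale dilation with a flat k x k structuring element (edge-padded)."""
    p = k // 2
    padded = np.pad(img, p, mode="edge")
    out = np.empty_like(img)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + k, j:j + k].max()
    return out

def white_tophat(img, k=3):
    """White top-hat: image minus its morphological opening (erosion then
    dilation). Flat regions map to zero; isolated bright peaks survive."""
    opening = dilate(erode(img, k), k)
    return img - opening
```

On a flat image the top-hat response is zero everywhere, while a single bright pixel passes through unchanged, which is why the transform is commonly used to detect (and then suppress) small bright artifacts.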
Pages: 29