A Novel Hybrid Architecture With Fast Lightweight Encoder and Transformer Under Attention Fusion for the Enhancement of Sand Dust and Haze Image Restoration

Times Cited: 0
Authors
Masood, Muhammad Khawaja Kashif [1,2]
Nava Baro, Enrique [1]
Otero, Pablo [1]
Affiliations
[1] Univ Malaga, Inst Ocean Engn Res, Malaga 29071, Spain
[2] Qassim Univ, Elect Engn Dept, Buraydah 52571, Saudi Arabia
Keywords
Convolution; Image restoration; Transformers; Feature extraction; Image color analysis; Computer vision; Generative adversarial networks; Training; Computer architecture; Computational efficiency; Sand dust and haze degraded images; color distortion; low contrast; color cast; vision transformer; encoder; QUALITY ASSESSMENT; ALGORITHMS;
DOI
10.1109/ACCESS.2025.3570983
CLC number (Chinese Library Classification)
TP [Automation and computer technology];
Subject classification code
0812;
Abstract
Outdoor weather conditions such as haze, fog, sand dust, and low light significantly degrade image quality, causing color distortions, low contrast, and poor visibility. Despite the importance of restoring such degraded images, haze removal, sand dust image enhancement, and related restoration tasks remain challenging, and the field is still relatively underexplored. While encoder-decoder neural networks have delivered noticeable improvements in image restoration, their ability to further improve image quality remains constrained. Recent advances in vision transformers and self-attention mechanisms have achieved remarkable success in various computer vision tasks; however, directly applying Vision Transformers to image restoration raises serious challenges, including reconciling local and global feature representations. This research addresses these limitations by restoring both sand dust and haze degraded images to a more natural and visually realistic appearance, with enhanced visibility, balanced colors, and refined details. We propose a novel hybrid architecture that combines depth-wise local feature extraction using lightweight encoders with global feature extraction via Vision Transformers. These features are fused through an attention fusion mechanism, ensuring seamless interaction between local and global feature representations. Finally, a single lightweight decoder reconstructs a high-quality restored image that closely matches the ground truth. The proposed method effectively reduces the feature inconsistency between Vision Transformer-based global features and lightweight encoder-based local features, leading to state-of-the-art performance on both synthetic and real-world sand dust and haze degraded images. Extensive evaluations show that the proposed method outperforms existing image restoration methods, delivering improved visibility, realistic textures, and superior image quality. Degraded images exhibiting color casts ranging from mild to severe are evaluated both qualitatively and quantitatively. In addition, training and testing times are compared and a novel Energy Efficiency Index (EEI) analysis is performed. The results show that the proposed method outperforms previous conventional and deep learning methods in terms of visual quality, evaluation metrics, training and testing time, and EEI.
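To make the pipeline described in the abstract concrete, the following is a minimal PyTorch sketch of such a hybrid design: a depth-wise convolutional lightweight encoder for local features, a small Vision Transformer branch for global features, a cross-attention fusion step in which the local feature map attends to the global tokens, and a single lightweight decoder. All module names, layer sizes, patch size, and the exact fusion scheme are illustrative assumptions for exposition only; they are not taken from the paper.

```python
# Illustrative sketch of a hybrid local/global restoration network.
# Layer widths, depths, and the fusion scheme are assumptions, not the
# authors' published configuration.
import torch
import torch.nn as nn


class LightweightEncoder(nn.Module):
    """Depth-wise separable convolutions for local feature extraction."""
    def __init__(self, in_ch=3, ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, ch, 3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, stride=2, padding=1, groups=ch),  # depth-wise
            nn.Conv2d(ch, ch, 1),                                  # point-wise
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.net(x)  # (B, ch, H/4, W/4)


class ViTBranch(nn.Module):
    """Patch embedding plus a transformer encoder for global features."""
    def __init__(self, in_ch=3, dim=64, patch=16, depth=4, heads=4):
        super().__init__()
        self.patch_embed = nn.Conv2d(in_ch, dim, kernel_size=patch, stride=patch)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           dim_feedforward=4 * dim,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, x):
        tokens = self.patch_embed(x)                     # (B, dim, H/16, W/16)
        b, c, h, w = tokens.shape
        seq = self.encoder(tokens.flatten(2).transpose(1, 2))  # (B, N, dim)
        return seq.transpose(1, 2).reshape(b, c, h, w)   # back to a feature map


class AttentionFusion(nn.Module):
    """Cross-attention: local features query the global (ViT) tokens."""
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, local_feat, global_feat):
        b, c, h, w = local_feat.shape
        q = local_feat.flatten(2).transpose(1, 2)        # (B, H*W, C) queries
        kv = global_feat.flatten(2).transpose(1, 2)      # fewer, coarser tokens
        fused, _ = self.attn(q, kv, kv)
        fused = fused.transpose(1, 2).reshape(b, c, h, w)
        return fused + local_feat                        # residual fusion


class LightweightDecoder(nn.Module):
    """Upsampling decoder that reconstructs the restored RGB image."""
    def __init__(self, ch=64, out_ch=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(ch, ch, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(ch, out_ch, 4, stride=2, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)


class HybridRestorer(nn.Module):
    """Local branch + global branch -> attention fusion -> single decoder."""
    def __init__(self):
        super().__init__()
        self.local = LightweightEncoder()
        self.global_branch = ViTBranch()
        self.fusion = AttentionFusion()
        self.decoder = LightweightDecoder()

    def forward(self, x):
        local_feat = self.local(x)                       # (B, 64, H/4, W/4)
        global_feat = self.global_branch(x)              # (B, 64, H/16, W/16)
        return self.decoder(self.fusion(local_feat, global_feat))


if __name__ == "__main__":
    model = HybridRestorer()
    degraded = torch.rand(1, 3, 256, 256)                # stand-in degraded image
    restored = model(degraded)
    print(restored.shape)                                # torch.Size([1, 3, 256, 256])
```

Cross-attention is used in this sketch because it lets the fine-grained local feature map query the coarser ViT tokens directly, so the two branches can operate at different resolutions without explicit resizing; the paper's actual attention fusion mechanism may differ.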
Pages: 86874-86891
Number of pages: 18