Adaptive frequency selection network for low-light image enhancement

Cited by: 0
Authors
Zhou, Shubo [1 ]
Jiang, Jingjing [1 ]
Fang, Zhijun [2 ]
Ding, Xiaoming [3 ]
Huang, Rong [1 ]
Jiang, Xue-Qin [1 ]
Affiliations
[1] Donghua Univ, Coll Informat Sci & Technol, Shanghai 201600, Peoples R China
[2] Donghua Univ, Sch Comp Sci & Technol, Shanghai 201600, Peoples R China
[3] Tianjin Normal Univ, Coll Elect & Commun Engn, Tianjin 300387, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Low-light image enhancement; Adaptive frequency selection; Dual-branch attention mechanism; Feature fusion; Histogram equalization; Quality assessment;
DOI
10.1016/j.displa.2025.103136
Chinese Library Classification (CLC)
TP3 [Computing technology, computer technology];
Discipline classification code
0812;
Abstract
Low-light image enhancement (LLIE) aims to enhance image brightness and contrast across diverse low-light scenarios, including uneven illumination, extreme darkness, backlighting, and nighttime conditions. Recent transformer-based LLIE advances have demonstrated promising performance. However, existing methods face challenges in extracting and processing high-frequency information, thereby limiting their capacity to preserve fine details and mitigate artifacts. To address these challenges, we propose an adaptive frequency selection network (AFSNet) for LLIE. In this network, we introduce a frequency-selective transformer (FSFormer), which employs a novel frequency modulation strategy to adaptively adjust distinct frequency components within images. The core module of the FSFormer is the dual-branch attention module (DBAM), which comprises a semantic branch and a frequency branch. The semantic branch uses multi-dconv head transposed attention to capture contextual and overall structural information. The frequency branch incorporates a dual frequency modulation module (DFMM) to achieve region-adaptive frequency decomposition and enhancement. Additionally, a frequency-aware feature fusion module (FreqFusion) facilitates feature fusion across different scales. Extensive experimental results demonstrate that our approach achieves competitive performance across multiple synthetic and real-world datasets compared to state-of-the-art (SOTA) methods.
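For a concrete picture of the dual-branch design described above, the following is a minimal sketch assuming a PyTorch-style implementation. The class names, the pooling-based low/high-frequency split, and the sigmoid gating are illustrative assumptions chosen to mirror the abstract's description, not the paper's actual DBAM/DFMM definitions.

```python
# Illustrative sketch (not the paper's code): a dual-branch block pairing a
# channel-wise transposed attention (semantic branch) with a simple
# low/high-frequency split and re-weighting (frequency branch).
import torch
import torch.nn as nn
import torch.nn.functional as F


class SemanticBranch(nn.Module):
    """Multi-dconv head transposed attention: attention computed over channels."""
    def __init__(self, dim, num_heads=4):
        super().__init__()
        self.num_heads = num_heads
        self.qkv = nn.Conv2d(dim, dim * 3, 1)
        self.qkv_dw = nn.Conv2d(dim * 3, dim * 3, 3, padding=1, groups=dim * 3)
        self.proj = nn.Conv2d(dim, dim, 1)

    def forward(self, x):
        b, c, h, w = x.shape
        q, k, v = self.qkv_dw(self.qkv(x)).chunk(3, dim=1)
        # Reshape to (b, heads, c/heads, h*w) and attend across channel slices.
        q = q.view(b, self.num_heads, c // self.num_heads, h * w)
        k = k.view(b, self.num_heads, c // self.num_heads, h * w)
        v = v.view(b, self.num_heads, c // self.num_heads, h * w)
        attn = (F.normalize(q, dim=-1) @ F.normalize(k, dim=-1).transpose(-2, -1)).softmax(dim=-1)
        out = (attn @ v).view(b, c, h, w)
        return self.proj(out)


class FrequencyBranch(nn.Module):
    """Pooling-based low-pass / residual high-pass split with learnable gates
    (a hypothetical stand-in for the paper's dual frequency modulation module)."""
    def __init__(self, dim):
        super().__init__()
        self.low_gate = nn.Conv2d(dim, dim, 1)   # per-pixel gain for low frequencies
        self.high_gate = nn.Conv2d(dim, dim, 1)  # per-pixel gain for high frequencies

    def forward(self, x):
        low = F.avg_pool2d(x, 3, stride=1, padding=1)  # crude low-pass filter
        high = x - low                                 # residual high-pass component
        return torch.sigmoid(self.low_gate(x)) * low + torch.sigmoid(self.high_gate(x)) * high


class DualBranchBlock(nn.Module):
    """Combines both branches and fuses them with a 1x1 convolution."""
    def __init__(self, dim, num_heads=4):
        super().__init__()
        self.norm = nn.GroupNorm(1, dim)  # layer-norm-like normalization over channels
        self.semantic = SemanticBranch(dim, num_heads)
        self.frequency = FrequencyBranch(dim)
        self.fuse = nn.Conv2d(dim * 2, dim, 1)

    def forward(self, x):
        y = self.norm(x)
        y = self.fuse(torch.cat([self.semantic(y), self.frequency(y)], dim=1))
        return x + y  # residual connection


if __name__ == "__main__":
    feat = torch.randn(1, 32, 64, 64)
    print(DualBranchBlock(32)(feat).shape)  # torch.Size([1, 32, 64, 64])
```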
Pages: 12