Pre-trained low-light image enhancement transformer

Cited: 1
Authors
Zhang, Jingyao [1 ,2 ]
Hao, Shijie [1 ,2 ]
Rao, Yuan [3 ]
Affiliations
[1] Hefei Univ Technol, Key Lab Knowledge Engn Big Data, Minist Educ, Hefei 230009, Peoples R China
[2] Hefei Univ Technol, Sch Comp Sci & Informat Engn, Hefei, Peoples R China
[3] Anhui Agr Univ, Sch Informat & Artificial Intelligence, Hefei, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
image enhancement; image processing; representation
DOI
10.1049/ipr2.13076
CLC classification
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Low-light image enhancement is a longstanding challenge in low-level vision, as images captured under low-light conditions often suffer from significant aesthetic quality flaws. Recent methods based on deep neural networks have made impressive progress in this area. In contrast to mainstream convolutional neural network (CNN)-based methods, this work proposes an effective solution inspired by the transformer, which has shown impressive performance across a wide range of tasks. The solution is centred on two key components: an image synthesis pipeline, and a powerful transformer-based pre-trained model, the low-light image enhancement transformer (LIET). The image synthesis pipeline includes illumination simulation and realistic noise simulation, enabling the generation of more life-like low-light images to overcome the issue of data scarcity. LIET combines streamlined CNN-based encoder-decoders with a transformer body, efficiently extracting global and local contextual features at relatively low computational cost. Extensive experiments show that this approach is highly competitive with current state-of-the-art methods. The code has been released and is available at . In summary, an effective transformer-based low-light image enhancement solution (LIET) is proposed, pre-trained on a large synthesized low/normal-light image dataset, which achieves state-of-the-art performance; the model combines CNN and transformer architectures for robust feature extraction at low cost and improved generalization capability.
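The abstract's description of the LIET layout — a lightweight CNN encoder that downsamples the image, a transformer body that models global context on the reduced feature map, and a CNN decoder that restores resolution — can be sketched as below. All layer sizes, function names, and the patch-pooling "convolutions" here are illustrative assumptions for exposition, not the paper's actual specification.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv_encode(x, stride=4, channels=32):
    """Toy 'CNN encoder': non-overlapping patch pooling + linear projection."""
    h, w, c = x.shape
    patches = x.reshape(h // stride, stride, w // stride, stride, c)
    patches = patches.transpose(0, 2, 1, 3, 4).reshape(h // stride, w // stride, -1)
    proj = rng.standard_normal((patches.shape[-1], channels)) * 0.02
    return patches @ proj  # downsampled feature map: (h/4, w/4, channels)

def transformer_body(feat):
    """Single-head global self-attention over all spatial positions."""
    hh, ww, c = feat.shape
    tokens = feat.reshape(hh * ww, c)
    wq, wk, wv = (rng.standard_normal((c, c)) * 0.02 for _ in range(3))
    q, k, v = tokens @ wq, tokens @ wk, tokens @ wv
    scores = q @ k.T / np.sqrt(c)
    attn = np.exp(scores - scores.max(axis=-1, keepdims=True))
    attn /= attn.sum(axis=-1, keepdims=True)
    # residual connection keeps local detail alongside global context
    return (tokens + attn @ v).reshape(hh, ww, c)

def conv_decode(feat, stride=4, out_channels=3):
    """Toy 'CNN decoder': linear projection back to pixels + unpooling."""
    hh, ww, c = feat.shape
    proj = rng.standard_normal((c, stride * stride * out_channels)) * 0.02
    up = (feat @ proj).reshape(hh, ww, stride, stride, out_channels)
    return up.transpose(0, 2, 1, 3, 4).reshape(hh * stride, ww * stride, out_channels)

low_light = rng.random((64, 64, 3))  # dummy low-light input image
enhanced = conv_decode(transformer_body(conv_encode(low_light)))
print(enhanced.shape)  # (64, 64, 3)
```

Running attention only on the 16x16 downsampled map rather than the full 64x64 image is what keeps the quadratic attention cost manageable, which mirrors the "low computational cost" claim in the abstract.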
Pages: 1967-1984
Page count: 18
Related papers
50 records in total
  • [41] PART: Pre-trained Authorship Representation Transformer
    Huertas-Tato, Javier
    Martin, Alejandro
    Camacho, David
    HUMAN-CENTRIC COMPUTING AND INFORMATION SCIENCES, 2024, 14
  • [42] Deep Compression of Pre-trained Transformer Models
    Wang, Naigang
    Liu, Chi-Chun
    Venkataramani, Swagath
    Sen, Sanchari
    Chen, Chia-Yu
    El Maghraoui, Kaoutar
    Srinivasan, Vijayalakshmi
    Chang, Leland
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022,
  • [43] Integrally Pre-Trained Transformer Pyramid Networks
    Tian, Yunjie
    Xie, Lingxi
    Wang, Zhaozhi
    Wei, Longhui
    Zhang, Xiaopeng
    Jiao, Jianbin
    Wang, Yaowei
    Tian, Qi
    Ye, Qixiang
    2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2023: 18610 - 18620
  • [44] Continuous detail enhancement framework for low-light image enhancement
    Liu, Kang
    Xv, Zhihao
    Yang, Zhe
    Liu, Lian
    Li, Xinyu
    Hu, Xiaopeng
    DISPLAYS, 2025, 88
  • [45] SwinLightGAN: a study of low-light image enhancement algorithms using depth residuals and transformer techniques
    He, Min
    Wang, Rugang
    Zhang, Mingyang
    Lv, Feiyang
    Wang, Yuanyuan
    Zhou, Feng
    Bian, Xuesheng
    SCIENTIFIC REPORTS, 15 (1)
  • [46] Low-light wheat image enhancement using an explicit inter-channel sparse transformer
    Wang, Yu
    Wang, Fei
    Li, Kun
    Feng, Xuping
    Hou, Wenhui
    Liu, Lu
    Chen, Liqing
    He, Yong
    Wang, Yuwei
    COMPUTERS AND ELECTRONICS IN AGRICULTURE, 2024, 224
  • [47] Illumination-Aware Low-Light Image Enhancement with Transformer and Auto-Knee Curve
    Pan, Jinwang
    Liu, Xianming
    Bai, Yuanchao
    Zhai, Deming
    Jiang, Junjun
    Zhao, Debin
    ACM TRANSACTIONS ON MULTIMEDIA COMPUTING COMMUNICATIONS AND APPLICATIONS, 2024, 20 (08)
  • [48] Low-light images enhancement via a dense transformer network
    Huang, Yi
    Fu, Gui
    Ren, Wanchun
    Tu, Xiaoguang
    Feng, Ziliang
    Liu, Bokai
    Liu, Jianhua
    Zhou, Chao
    Liu, Yuang
    Zhang, Xiaoqiang
    DIGITAL SIGNAL PROCESSING, 2024, 148
  • [49] Transformer-Based Multi-scale Optimization Network for Low-Light Image Enhancement
    Niu, Y.
    Lin, X.
    Xu, H.
    Li, Y.
    Chen, Y.
    Moshi Shibie yu Rengong Zhineng/Pattern Recognition and Artificial Intelligence, 2023, 36 (06): 511 - 529
  • [50] Retinexformer: One-stage Retinex-based Transformer for Low-light Image Enhancement
    Cai, Yuanhao
    Bian, Hao
    Lin, Jing
    Wang, Haoqian
    Timofte, Radu
    Zhang, Yulun
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2023), 2023: 12470 - 12479