TCPCNet: a transformer-CNN parallel cooperative network for low-light image enhancement

Times Cited: 3
Authors
Zhang, Wanjun [1 ]
Ding, Yujie [2 ]
Zhang, Miaohui [2 ]
Zhang, Yonghua [2 ]
Cao, Lvchen [2 ]
Huang, Ziqing [2 ]
Wang, Jun [2 ]
Affiliations
[1] Henan Univ, Sch Comp & Informat Engn, Kaifeng 475001, Peoples R China
[2] Henan Univ, Sch Artificial Intelligence, Zhengzhou 450046, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Low-light image enhancement; Transformer; Transformer-CNN; ILLUMINATION;
DOI
10.1007/s11042-023-17527-8
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Recently, deep learning has made impressive achievements in low-light image enhancement. Most existing deep learning-based methods use convolutional neural networks (CNNs), stacking network depth and modifying network architecture to improve feature extraction and restore degraded images. However, these methods have obvious defects: although a CNN excels at extracting local features, its small receptive field cannot capture global brightness, leading to overexposure. The Transformer model from natural language processing has recently produced strong results on a variety of computer vision tasks thanks to its excellent global modeling capability. However, its complex modeling makes it difficult to capture local details and consumes substantial computing resources, making it challenging to apply to low-light image enhancement, especially for high-resolution images. Building on the complementary characteristics of deep convolution and the Transformer, this paper proposes a Transformer-CNN Parallel Cooperative Network (TCPCNet), which supplements image details and local brightness while ensuring global brightness control. We also modify the computation of the traditional Transformer so that it can be applied to high-resolution low-light images without degrading performance. Extensive experiments on public datasets show that the proposed TCPCNet achieves performance comparable to state-of-the-art approaches.
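The core idea the abstract describes, namely a global-modeling branch (Transformer-like) running in parallel with a local-detail branch (CNN-like) whose outputs are fused, can be illustrated with a toy sketch. This is a hypothetical, didactic analogy on a 1-D "image", not the authors' network: `global_branch` stands in for full-receptive-field brightness modeling, `local_branch` for small-window detail extraction, and `parallel_fuse` for the cooperative fusion; all function names, the target brightness of 0.5, and the window/strength parameters are assumptions for illustration.

```python
# Conceptual sketch (hypothetical, not the paper's code): parallel
# global-brightness and local-detail branches fused by addition.

def global_branch(pixels, target=0.5):
    # Global correction: shift every pixel by the gap between the
    # image-wide mean and a target mean. This mimics a branch that
    # sees the whole image at once (Transformer-like receptive field).
    mean = sum(pixels) / len(pixels)
    return [p + (target - mean) for p in pixels]

def local_branch(pixels, k=1, strength=0.3):
    # Local detail residual from a small sliding window (CNN-like
    # receptive field): amplifies deviations from the local mean.
    out = []
    n = len(pixels)
    for i in range(n):
        lo, hi = max(0, i - k), min(n, i + k + 1)
        local_mean = sum(pixels[lo:hi]) / (hi - lo)
        out.append(strength * (pixels[i] - local_mean))
    return out

def parallel_fuse(pixels):
    # Parallel cooperation: both branches see the same input;
    # their outputs are summed.
    g = global_branch(pixels)
    l = local_branch(pixels)
    return [gi + li for gi, li in zip(g, l)]

dark = [0.10, 0.12, 0.30, 0.12, 0.10]  # dim 1-D "image" with one edge
enhanced = parallel_fuse(dark)
print([round(v, 3) for v in enhanced])
```

The global branch alone would brighten the image uniformly (and can overexpose real scenes); the local branch alone preserves edges but cannot fix overall darkness. Fusing them raises the mean brightness toward the target while keeping the edge at index 2 the brightest point, which is the complementary behavior the abstract argues for.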
Pages: 52957-52972
Number of Pages: 16