Additional Self-Attention Transformer With Adapter for Thick Haze Removal

Cited by: 3
|
Authors
Cai, Zhenyang [1 ]
Ning, Jin [1 ]
Ding, Zhiheng [1 ]
Duo, Bin [1 ]
Affiliations
[1] Chengdu University of Technology, College of Computer Science and Cyber Security, Chengdu 610059, People's Republic of China
Keywords
Image dehazing; remote sensing image (RSI); thick haze; transformer
DOI
10.1109/LGRS.2024.3368430
CLC Classification
P3 [Geophysics]; P59 [Geochemistry]
Discipline Codes
0708; 070902
Abstract
Remote sensing images (RSIs) are widely used in geological resource monitoring, earthquake relief, and weather forecasting, but their usefulness is easily degraded by haze cover. Transformer-based image dehazing models can remove haze from RSIs and improve their clarity; however, because their ability to extract detailed information is insufficient, they perform poorly under thick haze. To address this problem, this letter introduces an additional self-attention (AS) mechanism that helps an existing Transformer-based image dehazing model acquire more detailed information, together with an adapter module that improves the model's ability to fit the newly added components. Experimental results on benchmark RSIs show that the proposed method yields an average improvement of 0.95 dB in peak signal-to-noise ratio (PSNR) and 0.6% in structural similarity index measure (SSIM) for light haze removal. Notably, it achieves a gain of 1.34 dB in PSNR and 1.9% in SSIM for thick haze removal, underscoring its advantage under heavy haze conditions. The source code is available at https://github.com/Eric3200C/ASTA.
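To make the two additions described in the abstract concrete, the following is a minimal PyTorch sketch of a transformer block augmented with an additional self-attention branch and a bottleneck adapter. The module names, dimensions, and the placement of the branches are illustrative assumptions, not the authors' implementation; the official code at https://github.com/Eric3200C/ASTA is authoritative.

# Minimal sketch, assuming a standard pre-norm transformer block as the base.
# The extra self-attention branch and the adapter are the two additions the
# abstract describes; their exact design here is an assumption.
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter: down-project, nonlinearity, up-project, residual."""
    def __init__(self, dim: int, bottleneck: int = 16):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.act = nn.GELU()
        self.up = nn.Linear(bottleneck, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(self.act(self.down(x)))

class ASBlock(nn.Module):
    """Transformer block with an additional self-attention branch and an adapter."""
    def __init__(self, dim: int = 64, heads: int = 4):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        # Additional self-attention branch intended to capture finer detail.
        self.norm_as = nn.LayerNorm(dim)
        self.extra_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(),
                                 nn.Linear(4 * dim, dim))
        # Adapter that helps the backbone fit the newly added branches.
        self.adapter = Adapter(dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.norm1(x)
        x = x + self.attn(h, h, h, need_weights=False)[0]
        h = self.norm_as(x)
        x = x + self.extra_attn(h, h, h, need_weights=False)[0]
        x = x + self.mlp(self.norm2(x))
        return self.adapter(x)

# Usage: tokens from a (batch, seq_len, dim) feature map of image patches.
tokens = torch.randn(2, 256, 64)
out = ASBlock()(tokens)
print(out.shape)  # torch.Size([2, 256, 64])

One design note: placing the adapter after the block, with a small bottleneck width, keeps the number of newly trained parameters low, which is the usual motivation for adapters when extending a pretrained backbone.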
Pages: 1-5 (5 pages)