Additional Self-Attention Transformer With Adapter for Thick Haze Removal

Cited by: 3
|
Authors
Cai, Zhenyang [1 ]
Ning, Jin [1 ]
Ding, Zhiheng [1 ]
Duo, Bin [1 ]
Affiliations
[1] Chengdu Univ Technol, Coll Comp Sci & Cyber Secur, Chengdu 610059, Peoples R China
Keywords
Image dehazing; remote sensing image (RSI); thick haze; transformer;
DOI
10.1109/LGRS.2024.3368430
CLC Classification
P3 [Geophysics]; P59 [Geochemistry];
Subject Classification
0708 ; 070902 ;
Abstract
Remote sensing images (RSIs) are widely used in geological resource monitoring, earthquake relief, and weather forecasting, but their usefulness is easily nullified by haze cover. Transformer-based image dehazing models can remove haze from RSIs and improve their clarity. However, owing to an insufficient ability to extract detailed information, such models perform poorly under thick haze. To solve this problem, this letter introduces an additional self-attention (AS) mechanism into an existing Transformer-based image dehazing model to help it acquire more detailed information, and introduces an adapter module to improve the model's ability to fit the newly added components. Experimental results on benchmark RSIs indicate that the proposed method yields an average improvement of 0.95 dB in peak signal-to-noise ratio (PSNR) and 0.6% in structural similarity index measure (SSIM) for light haze removal. Notably, it achieves gains of 1.34 dB in PSNR and 1.9% in SSIM for thick haze removal, underscoring its advantage under heavy haze conditions. The source code can be accessed via https://github.com/Eric3200C/ASTA.
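The abstract names an adapter module without detailing its structure. Adapters in Transformer models commonly follow a bottleneck design (down-project, nonlinearity, up-project, residual add); the sketch below illustrates that general pattern only, as an assumption — it is not the authors' implementation, and all names and dimensions are illustrative.

```python
import numpy as np

def adapter(x, w_down, w_up):
    """Generic bottleneck adapter (illustrative, not the paper's code):
    project features down, apply ReLU, project back up, add residual."""
    h = np.maximum(x @ w_down, 0.0)   # down-projection + ReLU
    return x + h @ w_up               # up-projection + residual connection

# Toy dimensions: 4 tokens, feature dim 8, bottleneck dim 2.
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))
w_down = rng.standard_normal((8, 2)) * 0.01
w_up = np.zeros((2, 8))               # zero-init: adapter starts as identity

y = adapter(x, w_down, w_up)          # with zero-init w_up, y == x
```

Zero-initializing the up-projection is a common trick so that a freshly inserted adapter leaves the pretrained model's behavior unchanged at the start of fine-tuning.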
Pages: 1 - 5
Page count: 5
Related Papers
50 records
  • [21] Attention Guided CAM: Visual Explanations of Vision Transformer Guided by Self-Attention
    Leem, Saebom
    Seo, Hyunseok
    THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 4, 2024, : 2956 - 2964
  • [22] Decomformer: Decompose Self-Attention of Transformer for Efficient Image Restoration
    Lee, Eunho
    Hwang, Youngbae
    IEEE ACCESS, 2024, 12 : 38672 - 38684
  • [23] Self-Attention Attribution: Interpreting Information Interactions Inside Transformer
    Hao, Yaru
    Dong, Li
    Wei, Furu
    Xu, Ke
    THIRTY-FIFTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THIRTY-THIRD CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE AND THE ELEVENTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2021, 35 : 12963 - 12971
  • [24] Singularformer: Learning to Decompose Self-Attention to Linearize the Complexity of Transformer
    Wu, Yifan
    Kan, Shichao
    Zeng, Min
    Li, Min
    PROCEEDINGS OF THE THIRTY-SECOND INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, IJCAI 2023, 2023, : 4433 - 4441
  • [25] RSAFormer: A method of polyp segmentation with region self-attention transformer
    Yin, X.
    Zeng, J.
    Hou, T.
    Tang, C.
    Gan, C.
    Jain, D. K.
    García, S.
    COMPUTERS IN BIOLOGY AND MEDICINE, 2024, 172
  • [26] Nucleic Transformer: Classifying DNA Sequences with Self-Attention and Convolutions
    He, Shujun
    Gao, Baizhen
    Sabnis, Rushant
    Sun, Qing
    ACS SYNTHETIC BIOLOGY, 2023, 12 (11): : 3205 - 3214
  • [27] ET: Re-Thinking Self-Attention for Transformer Models on GPUs
    Chen, Shiyang
    Huang, Shaoyi
    Pandey, Santosh
    Li, Bingbing
    Gao, Guang R.
    Zheng, Long
    Ding, Caiwen
    Liu, Hang
    SC21: INTERNATIONAL CONFERENCE FOR HIGH PERFORMANCE COMPUTING, NETWORKING, STORAGE AND ANALYSIS, 2021,
  • [28] Top-k Self-Attention in Transformer for Video Inpainting
    Li, Guanxiao
    Zhang, Ke
    Su, Yu
    Wang, JingYu
    2024 5TH INTERNATIONAL CONFERENCE ON COMPUTER ENGINEERING AND APPLICATION, ICCEA 2024, 2024, : 1038 - 1042
  • [29] Transformer Self-Attention Change Detection Network with Frozen Parameters
    Cheng, Peiyang
    Xia, Min
    Wang, Dehao
    Lin, Haifeng
    Zhao, Zikai
    APPLIED SCIENCES-BASEL, 2025, 15 (06):
  • [30] Lightweight Vision Transformer with Spatial and Channel Enhanced Self-Attention
    Zheng, Jiahao
    Yang, Longqi
    Li, Yiying
    Yang, Ke
    Wang, Zhiyuan
    Zhou, Jun
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION WORKSHOPS, ICCVW, 2023, : 1484 - 1488