SST: self-attention transformer for infrared deconvolution

Cited: 0
Authors
Gao, Lei [1 ,2 ]
Yan, Xiaohong [1 ,2 ]
Deng, Lizhen [3 ]
Xu, Guoxia [3 ]
Zhu, Hu [3 ]
Affiliations
[1] Nanjing Univ Post & Telecommun, Coll Elect & Opt Engn, Nanjing 210003, Peoples R China
[2] Nanjing Univ Post & Telecommun, Coll Flexible Elect Future Technol, Nanjing 210003, Peoples R China
[3] Nanjing Univ Post & Telecommun, Sch Commun & Informat Engn, Nanjing 210003, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Infrared spectroscopy; Sparse; Self-attention mechanism; Spectrum deconvolution; BLIND DECONVOLUTION;
DOI
10.1016/j.infrared.2024.105384
CLC Number
TH7 [Instruments and Meters];
Discipline Code
0804 ; 080401 ; 081102 ;
Abstract
This study addresses the challenge of enhancing denoising in infrared spectroscopy signals and proposes a novel method based on a sparse self-attention model. In long-sequence spectrum deconvolution, traditional Transformers face several challenges: quadratic time complexity, high memory usage, and the limitations of the encoder-decoder architecture. To address these concerns, we introduce sparse self-attention mechanisms and extraction procedures that effectively mitigate the quadratic time complexity of Transformers. Additionally, a carefully designed generative decoder alleviates the constraints of the traditional encoder-decoder architecture. Applied to the restoration of infrared spectra, the proposed method yields satisfactory results. Leveraging the sparse self-attention model, we achieve enhanced denoising of infrared spectroscopy signals, providing a novel and effective approach to long-sequence time-series prediction. Experimental results demonstrate the broad applicability of this approach in infrared spectroscopy.
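The core idea behind sparse self-attention can be illustrated with a minimal top-k attention sketch in NumPy. This is an illustrative simplification under assumed conventions, not the paper's exact mechanism: each query keeps only its k highest-scoring keys, so the softmax weight mass concentrates on the few dominant entries (practical sparse-attention schemes additionally avoid forming the full quadratic score matrix).

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax; exp(-inf) rows of masked scores become 0.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def topk_sparse_attention(Q, K, V, k=4):
    """For each query, attend only to its k highest-scoring keys.

    Illustrative sketch of sparse self-attention: weights outside
    the top-k are masked to -inf before the softmax, so they
    contribute nothing to the output.
    """
    scores = Q @ K.T / np.sqrt(Q.shape[-1])          # (n_q, n_k) scaled dot products
    # Indices of the (n_k - k) smallest scores per row, to be masked out.
    drop = np.argpartition(scores, -k, axis=-1)[:, :-k]
    masked = scores.copy()
    np.put_along_axis(masked, drop, -np.inf, axis=-1)
    return softmax(masked, axis=-1) @ V              # (n_q, d) sparse-weighted values

# Toy self-attention over a length-16 sequence of 8-dim features.
rng = np.random.default_rng(0)
x = rng.standard_normal((16, 8))
out = topk_sparse_attention(x, x, x, k=4)
print(out.shape)  # (16, 8)
```

Each output row is a convex combination of at most k value vectors, which is the sparsity pattern the abstract's mechanism exploits to cut the cost of long-sequence deconvolution.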
Pages: 9
Related Papers (50 total)
  • [21] Self-Attention Attribution: Interpreting Information Interactions Inside Transformer
    Hao, Yaru
    Dong, Li
    Wei, Furu
    Xu, Ke
    THIRTY-FIFTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THIRTY-THIRD CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE AND THE ELEVENTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2021, 35 : 12963 - 12971
  • [22] Singularformer: Learning to Decompose Self-Attention to Linearize the Complexity of Transformer
    Wu, Yifan
    Kan, Shichao
    Zeng, Min
    Li, Min
    PROCEEDINGS OF THE THIRTY-SECOND INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, IJCAI 2023, 2023, : 4433 - 4441
  • [23] RSAFormer: A method of polyp segmentation with region self-attention transformer
    Yin X.
    Zeng J.
    Hou T.
    Tang C.
    Gan C.
    Jain D.K.
    García S.
    Computers in Biology and Medicine, 2024, 172
  • [24] Nucleic Transformer: Classifying DNA Sequences with Self-Attention and Convolutions
    He, Shujun
    Gao, Baizhen
    Sabnis, Rushant
    Sun, Qing
    ACS SYNTHETIC BIOLOGY, 2023, 12 (11): : 3205 - 3214
  • [25] E.T.: Re-Thinking Self-Attention for Transformer Models on GPUs
    Chen, Shiyang
    Huang, Shaoyi
    Pandey, Santosh
    Li, Bingbing
    Gao, Guang R.
    Zheng, Long
    Ding, Caiwen
    Liu, Hang
    SC21: INTERNATIONAL CONFERENCE FOR HIGH PERFORMANCE COMPUTING, NETWORKING, STORAGE AND ANALYSIS, 2021,
  • [26] Top-k Self-Attention in Transformer for Video Inpainting
    Li, Guanxiao
    Zhang, Ke
    Su, Yu
    Wang, JingYu
    2024 5TH INTERNATIONAL CONFERENCE ON COMPUTER ENGINEERING AND APPLICATION, ICCEA 2024, 2024, : 1038 - 1042
  • [27] Additional Self-Attention Transformer With Adapter for Thick Haze Removal
    Cai, Zhenyang
    Ning, Jin
    Ding, Zhiheng
    Duo, Bin
    IEEE GEOSCIENCE AND REMOTE SENSING LETTERS, 2024, 21 : 1 - 5
  • [28] Transformer Self-Attention Change Detection Network with Frozen Parameters
    Cheng, Peiyang
    Xia, Min
    Wang, Dehao
    Lin, Haifeng
    Zhao, Zikai
    APPLIED SCIENCES-BASEL, 2025, 15 (06):
  • [29] Lightweight Vision Transformer with Spatial and Channel Enhanced Self-Attention
    Zheng, Jiahao
    Yang, Longqi
    Li, Yiying
    Yang, Ke
    Wang, Zhiyuan
    Zhou, Jun
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION WORKSHOPS, ICCVW, 2023, : 1484 - 1488
  • [30] Spectral Superresolution Using Transformer with Convolutional Spectral Self-Attention
    Liao, Xiaomei
    He, Lirong
    Mao, Jiayou
    Xu, Meng
    REMOTE SENSING, 2024, 16 (10)