SST: self-attention transformer for infrared deconvolution

Cited: 0
Authors
Gao, Lei [1 ,2 ]
Yan, Xiaohong [1 ,2 ]
Deng, Lizhen [3 ]
Xu, Guoxia [3 ]
Zhu, Hu [3 ]
Affiliations
[1] Nanjing Univ Post & Telecommun, Coll Elect & Opt Engn, Nanjing 210003, Peoples R China
[2] Nanjing Univ Post & Telecommun, Coll Flexible Elect Future Technol, Nanjing 210003, Peoples R China
[3] Nanjing Univ Post & Telecommun, Sch Commun & Informat Engn, Nanjing 210003, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Infrared spectroscopy; Sparse; Self-attention mechanism; Spectrum deconvolution; BLIND DECONVOLUTION;
DOI
10.1016/j.infrared.2024.105384
CLC number
TH7 [Instruments and meters];
Discipline codes
0804 ; 080401 ; 081102 ;
Abstract
This study addresses the challenge of denoising infrared spectroscopy signals and proposes a novel method based on a sparse self-attention model. For long-sequence spectrum deconvolution, traditional Transformers face quadratic time complexity, high memory usage, and the limitations of the encoder-decoder architecture. To address these issues, we introduce sparse self-attention mechanisms and extraction procedures that reduce the quadratic time complexity of the Transformer; in addition, a carefully designed generative decoder alleviates the constraints of the conventional encoder-decoder architecture. Applied to the restoration of infrared spectra, the proposed method yields satisfactory results: leveraging the sparse self-attention model, it achieves enhanced denoising of infrared spectroscopy signals and provides a novel and effective approach to long-sequence time series prediction. Experimental findings demonstrate the broad applicability of this approach in the field of infrared spectroscopy.
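The sparse self-attention idea summarized in the abstract can be sketched as follows. This is a minimal illustrative implementation, not the authors' exact method: the function names, the max-minus-mean sparsity measure used to pick "active" queries, and the mean-of-values fallback for the remaining queries are assumptions for clarity (a complexity-reducing variant would also estimate the sparsity measure from sampled keys rather than from the full score matrix, as done here for readability).

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def sparse_self_attention(X, Wq, Wk, Wv, top_u):
    """Sparse self-attention sketch: only the top_u most 'active'
    queries attend to all keys; the remaining queries fall back to
    the mean of the value vectors, so the expensive softmax rows are
    computed for top_u queries instead of all L of them."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                    # (L, L) attention scores
    # Sparsity measure per query: peaked score rows (max far above mean)
    # indicate queries whose attention is informative rather than uniform.
    sparsity = scores.max(axis=1) - scores.mean(axis=1)
    idx = np.argsort(sparsity)[-top_u:]              # dominant query indices
    out = np.tile(V.mean(axis=0), (X.shape[0], 1))   # lazy queries -> mean(V)
    out[idx] = softmax(scores[idx]) @ V              # active queries -> full attention
    return out

# Usage on a toy "long sequence": 16 positions, 8-dim features.
rng = np.random.default_rng(0)
L, d = 16, 8
X = rng.standard_normal((L, d))
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))
Y = sparse_self_attention(X, Wq, Wk, Wv, top_u=4)   # Y has shape (16, 8)
```

The design choice mirrored here is the trade-off the abstract describes: by restricting full attention to a small set of dominant queries, the per-layer cost drops from quadratic in the sequence length toward near-linear, which is what makes long-sequence spectrum deconvolution tractable.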
Pages: 9