SST: self-attention transformer for infrared deconvolution

Cited by: 0
Authors
Gao, Lei [1,2]
Yan, Xiaohong [1,2]
Deng, Lizhen [3]
Xu, Guoxia [3]
Zhu, Hu [3]
Affiliations
[1] Nanjing Univ Post & Telecommun, Coll Elect & Opt Engn, Nanjing 210003, Peoples R China
[2] Nanjing Univ Post & Telecommun, Coll Flexible Elect Future Technol, Nanjing 210003, Peoples R China
[3] Nanjing Univ Post & Telecommun, Sch Commun & Informat Engn, Nanjing 210003, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Infrared spectroscopy; Sparse; Self-attention mechanism; Spectrum deconvolution; Blind deconvolution;
DOI
10.1016/j.infrared.2024.105384
CLC Number
TH7 [Instruments and Meters];
Discipline Code
0804; 080401; 081102;
Abstract
This study addresses the challenge of denoising infrared spectroscopy signals and proposes a novel method based on a sparse self-attention model. For long-sequence spectrum deconvolution, traditional Transformers face several obstacles: quadratic time complexity, high memory usage, and the restrictions imposed by the encoder-decoder architecture. To address these concerns, we present sparse self-attention mechanisms and an extraction (distilling) procedure that effectively tame the quadratic time complexity of the Transformer. In addition, a carefully designed generative decoder alleviates the constraints of the traditional encoder-decoder architecture. Applied to the restoration of infrared spectra, the proposed method yields satisfactory results: leveraging the sparse self-attention model, we achieve enhanced denoising of infrared spectroscopy signals and provide a novel, effective approach to long-sequence time-series prediction. Experimental findings demonstrate the broad applicability of this approach in infrared spectroscopy.
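The abstract names the key ingredients (query-sparse self-attention, a distilling/extraction step, a generative decoder) but gives no implementation details. As a rough illustration of the kind of sparse self-attention it describes, below is a minimal PyTorch sketch of top-u query-sparse attention in the spirit of Informer-style ProbSparse attention; the module name, the max-minus-mean sparsity score, the sampling size, and the top_ratio hyperparameter are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of top-u query-sparse self-attention for 1-D sequences
# (e.g. an embedded infrared spectrum). Assumptions, not the paper's code.
import math
import torch
import torch.nn as nn


class SparseSelfAttention(nn.Module):
    def __init__(self, d_model: int, top_ratio: float = 0.25):
        super().__init__()
        self.q_proj = nn.Linear(d_model, d_model)
        self.k_proj = nn.Linear(d_model, d_model)
        self.v_proj = nn.Linear(d_model, d_model)
        self.out_proj = nn.Linear(d_model, d_model)
        self.top_ratio = top_ratio

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        B, L, D = x.shape
        q, k, v = self.q_proj(x), self.k_proj(x), self.v_proj(x)

        # Score each query against a small random subset of keys, so the
        # sparsity measurement itself stays sub-quadratic.
        n_sample = min(L, max(1, 5 * int(math.ceil(math.log(L + 1)))))
        sample_idx = torch.randint(L, (n_sample,), device=x.device)
        sample_scores = q @ k[:, sample_idx].transpose(-2, -1) / math.sqrt(D)

        # Sparsity measure: queries with peaked scores (max >> mean) are
        # "active"; near-uniform ("lazy") queries carry little information.
        m = sample_scores.max(dim=-1).values - sample_scores.mean(dim=-1)
        u = max(1, int(self.top_ratio * L))
        top_idx = m.topk(u, dim=-1).indices                      # (B, u)

        # Exact attention only for the u active queries.
        q_top = q.gather(1, top_idx.unsqueeze(-1).expand(B, u, D))
        attn = torch.softmax(q_top @ k.transpose(-2, -1) / math.sqrt(D), -1)
        ctx = attn @ v                                           # (B, u, D)

        # Lazy queries fall back to the mean of the value vectors.
        out = v.mean(dim=1, keepdim=True).expand(B, L, D).clone()
        out.scatter_(1, top_idx.unsqueeze(-1).expand(B, u, D), ctx)
        return self.out_proj(out)
```

Only the u ≈ top_ratio·L active queries attend over all L keys, so the attention cost drops from O(L²) toward O(u·L); the distilling step and the generative decoder mentioned in the abstract would sit around such a module in a full encoder-decoder and are not sketched here.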
Pages: 9