Spectral Superresolution Using Transformer with Convolutional Spectral Self-Attention

Times Cited: 3
|
Authors
Liao, Xiaomei [1 ]
He, Lirong [2 ]
Mao, Jiayou [2 ]
Xu, Meng [2 ]
Affiliations
[1] Shenzhen Univ, Coll Life Sci & Oceanog, Shenzhen 518060, Peoples R China
[2] Shenzhen Univ, Coll Comp Sci & Software Engn, Shenzhen 518060, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
hyperspectral image; spectral superresolution; transformer; convolutional neural network; self-attention; NETWORK;
DOI
10.3390/rs16101688
Chinese Library Classification
X [Environmental Science, Safety Science];
Discipline Code
08 ; 0830 ;
Abstract
Hyperspectral images (HSIs) are used extensively across numerous domains of study. Spectral superresolution (SSR) refers to reconstructing HSIs from readily available RGB images by exploiting the mapping relationships between RGB images and HSIs. In recent years, convolutional neural networks (CNNs) have become widely adopted in SSR research, primarily because of their exceptional ability to extract features. However, most current CNN-based algorithms are weak at extracting the spectral features of HSIs. While certain algorithms can reconstruct HSIs through the fusion of spectral and spatial data, their practical effectiveness is hindered by substantial computational complexity. In light of these challenges, we propose a lightweight network, a Transformer with convolutional spectral self-attention (TCSSA), for SSR. TCSSA comprises a CNN-Transformer encoder and a CNN-Transformer decoder, in which convolutional spectral self-attention blocks (CSSABs) are the basic modules. Multiple cascaded encoding and decoding modules within TCSSA facilitate the efficient extraction of spatial and spectral contextual information from HSIs. Convolutional spectral self-attention (CSSA), the basic unit of the CSSAB, combines CNN operations with transformer self-attention, effectively extracting both spatial local features and global spectral features from HSIs. Experimental validation of TCSSA's effectiveness is performed on three distinct datasets: GF5 for remote sensing images, and CAVE and NTIRE2022 for natural images. The experimental results demonstrate that the proposed method achieves a harmonious balance between reconstruction performance and computational complexity.
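The abstract describes self-attention computed along the spectral (channel) axis rather than the spatial axis, which is what keeps the model lightweight: the attention map is C x C instead of N x N for N = H * W pixels. The paper provides no code, so the following is only a minimal numpy sketch of that channel-wise attention idea; the function name `spectral_self_attention` and the plain matrix projections `wq`, `wk`, `wv` are illustrative assumptions, not the authors' implementation (which also fuses convolutional local features).

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def spectral_self_attention(x, wq, wk, wv):
    """Self-attention over the spectral axis of an (H, W, C) cube.

    Hypothetical sketch: tokens are the C channel maps, so the
    attention matrix is (C, C) instead of (H*W, H*W).
    """
    h, w, c = x.shape
    t = x.reshape(h * w, c)                    # (N, C), N = H*W pixels
    q, k, v = t @ wq, t @ wk, t @ wv           # linear projections, (N, C)
    # Channel-to-channel similarity: (C, N) @ (N, C) -> (C, C)
    attn = softmax((q.T @ k) / np.sqrt(h * w), axis=-1)
    # Mix channels of V by the attention weights, restore spatial shape
    return (v @ attn.T).reshape(h, w, c)
```

For an HSI with, say, 256 x 256 pixels and 31 bands, the attention map here is 31 x 31 rather than 65536 x 65536, which illustrates the complexity advantage the abstract claims for spectral over spatial self-attention.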
Pages: 20