A Transformer-Based Approach Combining Deep Learning Network and Spatial-Temporal Information for Raw EEG Classification

Cited by: 105
|
Authors
Xie, Jin [1 ,2 ,3 ]
Zhang, Jie [1 ,2 ,4 ]
Sun, Jiayao [1 ,3 ]
Ma, Zheng [1 ,3 ]
Qin, Liuni [1 ,2 ,3 ]
Li, Guanglin [1 ,5 ]
Zhou, Huihui [1 ,4 ]
Zhan, Yang [1 ,3 ,6 ]
Affiliations
[1] Chinese Acad Sci, Shenzhen Inst Adv Technol, Shenzhen 518000, Peoples R China
[2] Univ Chinese Acad Sci, Beijing 101408, Peoples R China
[3] Shenzhen Fundamental Res Inst, Shenzhen Key Lab Translat Res Brain Dis, Shenzhen Hong Kong Inst Brain Sci, Shenzhen 518055, Peoples R China
[4] Peng Cheng Lab, Res Ctr Artificial Intelligence, Shenzhen 518066, Peoples R China
[5] CAS Key Lab Human Machine Intelligence Synergy Sy, Shenzhen 518055, Peoples R China
[6] CAS Key Lab Brain Connectome & Manipulat, Shenzhen 518055, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Electroencephalography; Transformers; Brain modeling; Task analysis; Feature extraction; Data models; Deep learning; Motor imagery (MI); EEG classification; transformer; attention mechanism; CNN; visualization; brain-computer interface (BCI);
DOI
10.1109/TNSRE.2022.3194600
Chinese Library Classification
R318 [Biomedical Engineering];
Discipline Code
0831;
Abstract
The attention mechanism of the Transformer has the advantage of extracting feature correlations in long-sequence data and of making the model visualizable. In EEG signals, which are time-series data, the spatial and temporal dependencies between time points and between channels carry important information for accurate classification. So far, Transformer-based approaches have not been widely explored for motor-imagery EEG classification and visualization, and general models validated across individuals are especially lacking. Taking advantage of the Transformer model and the spatial-temporal characteristics of EEG signals, we designed Transformer-based models for the classification of motor-imagery EEG on the PhysioNet dataset. With 3 s of EEG data, our models achieved the best classification accuracies of 83.31%, 74.44%, and 64.22% on two-, three-, and four-class motor-imagery tasks under cross-individual validation, outperforming other state-of-the-art models by 0.88%, 2.11%, and 1.06%, respectively. Including positional-embedding modules in the Transformer improved EEG classification performance. Furthermore, visualization of the attention weights provided insights into the working mechanism of the Transformer-based networks during motor-imagery tasks. The topography of the attention weights revealed a pattern of event-related desynchronization (ERD) consistent with the spectral analysis of the mu and beta rhythms over the sensorimotor areas. Together, our deep learning methods not only provide novel and powerful tools for classifying and understanding EEG data but also have broad applications in brain-computer interface (BCI) systems.
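The abstract's central mechanism is self-attention over the spatial-temporal structure of EEG, with the attention weights later visualized as scalp topographies. As an illustration only, not the authors' published architecture, the following minimal NumPy sketch computes scaled dot-product self-attention over a toy segment shaped like PhysioNet EEG (64 channels treated as tokens, 160 samples each); the channel-by-channel weight matrix is the kind of quantity one could project onto a topographic map:

```python
import numpy as np

def scaled_dot_product_attention(q, k, v):
    """Scaled dot-product attention; q, k, v have shape (tokens, features).

    Returns the attended output and the (tokens, tokens) weight matrix.
    """
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)               # pairwise similarity, scaled
    scores -= scores.max(axis=-1, keepdims=True)  # for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ v, weights

# Toy "EEG" segment: 64 channels x 160 time samples (hypothetical data)
rng = np.random.default_rng(0)
x = rng.standard_normal((64, 160))

out, attn = scaled_dot_product_attention(x, x, x)
print(out.shape)                             # (64, 160)
print(attn.shape)                            # (64, 64)
print(np.allclose(attn.sum(axis=-1), 1.0))   # True: each row is a distribution
```

The same computation applied with time points as tokens would capture temporal rather than spatial dependencies; the paper's models combine both views.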
Pages: 2126 / 2136
Page count: 11
Related papers
(50 total)
  • [1] A Transformer-Based Approach Combining Deep Learning Network and Spatial-Temporal Information for Raw EEG Classification
    Xie, Jin
    Zhang, Jie
    Sun, Jiayao
    Ma, Zheng
    Qin, Liuni
    Li, Guanglin
    Zhou, Huihui
    Zhan, Yang
    IEEE Transactions on Neural Systems and Rehabilitation Engineering, 2022, 30 : 2126 - 2136
  • [2] A Transformer-Based Spatial-Temporal Sleep Staging Model Through Raw EEG
    Shi, Guang
    Chen, Zheng
    Zhang, Renyuan
    2021 INTERNATIONAL CONFERENCE ON HIGH PERFORMANCE BIG DATA AND INTELLIGENT SYSTEMS (HPBD&IS), 2021, : 110 - 115
  • [3] Emotion Classification Based on Transformer and CNN for EEG Spatial-Temporal Feature Learning
    Yao, Xiuzhen
    Li, Tianwen
    Ding, Peng
    Wang, Fan
    Zhao, Lei
    Gong, Anmin
    Nan, Wenya
    Fu, Yunfa
    BRAIN SCIENCES, 2024, 14 (03)
  • [4] A Hybrid Transformer-based Spatial-Temporal Network for Traffic Flow Prediction
    Tian, Guanqun
    Li, Dequan
    2024 IEEE 19TH CONFERENCE ON INDUSTRIAL ELECTRONICS AND APPLICATIONS, ICIEA 2024, 2024,
  • [5] Social Network Information Diffusion Prediction Based on Spatial-Temporal Transformer
    Fan W.
    Liu Y.
    Jisuanji Yanjiu yu Fazhan/Computer Research and Development, 2022, 59 (08): : 1757 - 1769
  • [6] STILN: A novel spatial-temporal information learning network for EEG-based emotion recognition
    Tang, Yiheng
    Wang, Yongxiong
    Zhang, Xiaoli
    Wang, Zhe
    BIOMEDICAL SIGNAL PROCESSING AND CONTROL, 2023, 85
  • [7] Transformer-Based Multimodal Spatial-Temporal Fusion for Gait Recognition
    Zhang, Jikai
    Ji, Mengyu
    He, Yihao
    Guo, Dongliang
    PATTERN RECOGNITION AND COMPUTER VISION, PRCV 2024, PT XV, 2025, 15045 : 494 - 507
  • [8] Tourism demand forecasting: a deep learning model based on spatial-temporal transformer
    Chen, Jiaying
    Li, Cheng
    Huang, Liyao
    Zheng, Weimin
    TOURISM REVIEW, 2025, 80 (03) : 648 - 663
  • [9] A spatial and temporal transformer-based EEG emotion recognition in VR environment
    Li, Ming
    Yu, Peng
    Shen, Yang
    FRONTIERS IN HUMAN NEUROSCIENCE, 2025, 19
  • [10] Learning a spatial-temporal texture transformer network for video inpainting
    Ma, Pengsen
    Xue, Tao
    FRONTIERS IN NEUROROBOTICS, 2022, 16