A Transformer-Based Approach Combining Deep Learning Network and Spatial-Temporal Information for Raw EEG Classification

Cited: 105
|
Authors
Xie, Jin [1 ,2 ,3 ]
Zhang, Jie [1 ,2 ,4 ]
Sun, Jiayao [1 ,3 ]
Ma, Zheng [1 ,3 ]
Qin, Liuni [1 ,2 ,3 ]
Li, Guanglin [1 ,5 ]
Zhou, Huihui [1 ,4 ]
Zhan, Yang [1 ,3 ,6 ]
Affiliations
[1] Chinese Acad Sci, Shenzhen Inst Adv Technol, Shenzhen 518000, Peoples R China
[2] Univ Chinese Acad Sci, Beijing 101408, Peoples R China
[3] Shenzhen Fundamental Res Inst, Shenzhen Key Lab Translat Res Brain Dis, Shenzhen Hong Kong Inst Brain Sci, Shenzhen 518055, Peoples R China
[4] Peng Cheng Lab, Res Ctr Artificial Intelligence, Shenzhen 518066, Peoples R China
[5] CAS Key Lab Human Machine Intelligence Synergy Sy, Shenzhen 518055, Peoples R China
[6] CAS Key Lab Brain Connectome & Manipulat, Shenzhen 518055, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Electroencephalography; Transformers; Brain modeling; Task analysis; Feature extraction; Data models; Deep learning; Motor imagery (MI); EEG classification; transformer; attention mechanism; CNN; visualization; brain-computer interface (BCI);
DOI
10.1109/TNSRE.2022.3194600
Chinese Library Classification
R318 [Biomedical Engineering];
Subject Classification Code
0831;
Abstract
The attention mechanism of the Transformer has the advantage of extracting feature correlations in long-sequence data and of making the model visualizable. In time-series data such as EEG, the spatial and temporal dependencies between time points and between channels carry information that is important for accurate classification. So far, Transformer-based approaches have not been widely explored for motor-imagery EEG classification and visualization, and general models validated across individuals are especially lacking. Taking advantage of the Transformer model and the spatial-temporal characteristics of EEG signals, we designed Transformer-based models for the classification of motor-imagery EEG on the PhysioNet dataset. With 3-s EEG data, our models achieved best classification accuracies of 83.31%, 74.44%, and 64.22% on two-, three-, and four-class motor-imagery tasks under cross-individual validation, outperforming other state-of-the-art models by 0.88%, 2.11%, and 1.06%, respectively. Including positional-embedding modules in the Transformer improved EEG classification performance. Furthermore, visualization of the attention weights provided insights into the working mechanism of the Transformer-based networks during motor-imagery tasks. The topography of the attention weights revealed a pattern of event-related desynchronization (ERD) consistent with the spectral analysis of the Mu and beta rhythms over the sensorimotor areas. Together, our deep-learning methods not only provide novel and powerful tools for classifying and understanding EEG data but also have broad applications in brain-computer interface (BCI) systems.
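The core operation the abstract relies on can be sketched in a few lines: scaled dot-product self-attention applied to an EEG segment, with a positional embedding added along the time axis. This is a minimal NumPy illustration, not the authors' architecture; the sequence length, embedding size, and random projection matrices are all illustrative assumptions (e.g. 3 s of EEG at 160 Hz, as in the PhysioNet recordings).

```python
# Minimal sketch (NumPy only) of scaled dot-product self-attention over an
# EEG segment, with a stand-in positional embedding added to the time axis.
# All sizes and weights are illustrative assumptions, not the paper's model.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, w_q, w_k, w_v):
    """x: (T, d) sequence of per-time-point EEG feature vectors."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    weights = softmax(q @ k.T / np.sqrt(k.shape[-1]))  # (T, T) attention map
    return weights @ v, weights

rng = np.random.default_rng(0)
T, d = 480, 16                           # assumed: 3 s at 160 Hz, d-dim features
x = rng.standard_normal((T, d))          # stand-in for projected EEG data
pos = 0.1 * rng.standard_normal((T, d))  # stand-in positional embedding
w_q, w_k, w_v = (rng.standard_normal((d, d)) for _ in range(3))
out, attn = self_attention(x + pos, w_q, w_k, w_v)
print(out.shape, attn.shape)             # (480, 16) (480, 480)
```

The (T, T) attention map is what makes the visualization in the abstract possible: each row shows how strongly one time point attends to every other, and the same construction over channels yields the spatial attention topographies discussed for ERD.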
Pages: 2126 / 2136
Page count: 11