A Transformer-Based Approach Combining Deep Learning Network and Spatial-Temporal Information for Raw EEG Classification

Cited by: 108
Authors
Xie, Jin [1 ,2 ,3 ]
Zhang, Jie [1 ,2 ,4 ]
Sun, Jiayao [1 ,3 ]
Ma, Zheng [1 ,3 ]
Qin, Liuni [1 ,2 ,3 ]
Li, Guanglin [1 ,5 ]
Zhou, Huihui [1 ,4 ]
Zhan, Yang [1 ,3 ,6 ]
Affiliations
[1] Chinese Acad Sci, Shenzhen Inst Adv Technol, Shenzhen 518000, Peoples R China
[2] Univ Chinese Acad Sci, Beijing 101408, Peoples R China
[3] Shenzhen Fundamental Res Inst, Shenzhen Key Lab Translat Res Brain Dis, Shenzhen Hong Kong Inst Brain Sci, Shenzhen 518055, Peoples R China
[4] Peng Cheng Lab, Res Ctr Artificial Intelligence, Shenzhen 518066, Peoples R China
[5] CAS Key Lab Human Machine Intelligence Synergy Sy, Shenzhen 518055, Peoples R China
[6] CAS Key Lab Brain Connectome & Manipulat, Shenzhen 518055, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Electroencephalography; Transformers; Brain modeling; Task analysis; Feature extraction; Data models; Deep learning; Motor imagery (MI); EEG classification; transformer; attention mechanism; CNN; visualization; brain-computer interface (BCI);
DOI
10.1109/TNSRE.2022.3194600
Chinese Library Classification (CLC)
R318 [Biomedical Engineering];
Subject Classification Code
0831;
Abstract
The attention mechanism of the Transformer is well suited to extracting feature correlations in long-sequence data and to visualizing the model. Because EEG signals are time-series data, their spatial and temporal dependencies across time points and channels carry important information for accurate classification. So far, Transformer-based approaches have not been widely explored for motor-imagery EEG classification and visualization, and general models validated across individuals are particularly lacking. Taking advantage of the Transformer model and the spatial-temporal characteristics of EEG signals, we designed Transformer-based models for the classification of motor-imagery EEG on the PhysioNet dataset. With 3-s EEG data, our models achieved the best classification accuracies of 83.31%, 74.44%, and 64.22% on two-, three-, and four-class motor-imagery tasks in cross-individual validation, outperforming other state-of-the-art models by 0.88%, 2.11%, and 1.06%. The inclusion of positional embedding modules in the Transformer could improve EEG classification performance. Furthermore, visualization of the attention weights provided insights into the working mechanism of the Transformer-based networks during motor-imagery tasks. The topography of the attention weights revealed a pattern of event-related desynchronization (ERD) that was consistent with the spectral analysis of mu and beta rhythms over the sensorimotor areas. Together, our deep learning methods not only provide novel and powerful tools for classifying and understanding EEG data but also have broad applications for brain-computer interface (BCI) systems.
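The abstract describes attention applied over both time points (temporal) and EEG channels (spatial), with positional embeddings added before the Transformer encoders and the attention weights later visualized as topographies. The authors' exact architecture and hyperparameters are given in the paper, not reproduced here; the following minimal PyTorch sketch only illustrates the general idea. The class name SpatialTemporalEEGTransformer, the two-branch layout, the pooling and classifier head, and all dimensions and hyperparameters are assumptions for illustration (PhysioNet motor-imagery EEG uses 64 electrodes sampled at 160 Hz, so 3 s corresponds to roughly 480 samples).

```python
import torch
import torch.nn as nn

class SpatialTemporalEEGTransformer(nn.Module):
    """Illustrative sketch (not the published model): attention over time
    points and over channels, each with learnable positional embeddings,
    followed by a linear classifier for motor-imagery classes."""

    def __init__(self, n_channels=64, n_samples=480, d_model=64,
                 n_heads=4, n_layers=2, n_classes=4):
        super().__init__()
        # Temporal branch: each time point is a token (features = channels).
        self.temporal_proj = nn.Linear(n_channels, d_model)
        self.temporal_pos = nn.Parameter(torch.zeros(1, n_samples, d_model))
        temporal_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True)
        self.temporal_encoder = nn.TransformerEncoder(temporal_layer, n_layers)
        # Spatial branch: each electrode is a token (features = time samples).
        self.spatial_proj = nn.Linear(n_samples, d_model)
        self.spatial_pos = nn.Parameter(torch.zeros(1, n_channels, d_model))
        spatial_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True)
        self.spatial_encoder = nn.TransformerEncoder(spatial_layer, n_layers)
        # Classifier on the concatenated pooled representations.
        self.classifier = nn.Linear(2 * d_model, n_classes)

    def forward(self, x):                        # x: (batch, channels, samples)
        t = self.temporal_proj(x.transpose(1, 2)) + self.temporal_pos
        t = self.temporal_encoder(t).mean(dim=1)      # pool over time tokens
        s = self.spatial_proj(x) + self.spatial_pos
        s = self.spatial_encoder(s).mean(dim=1)       # pool over channel tokens
        return self.classifier(torch.cat([t, s], dim=-1))

# Example: a batch of 8 raw EEG trials, 64 channels x 480 samples (3 s at 160 Hz).
logits = SpatialTemporalEEGTransformer()(torch.randn(8, 64, 480))
print(logits.shape)  # torch.Size([8, 4])
```

In this sketch, the per-head attention weights inside each encoder layer are the quantities that could be averaged and mapped onto the electrode layout to produce attention topographies of the kind the abstract describes.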
Pages: 2126-2136
Number of pages: 11