Motor imagery task classification using spatial-time-frequency features of EEG signals: a deep learning approach for improved performance

Cited: 0
Authors
Jishad, T. K. Muhamed [1 ]
Sudeep, P. V. [2 ]
Sanjay, M. [1 ]
Affiliations
[1] Natl Inst Technol Calicut, Dept Elect Engn, Kozhikode 673601, Kerala, India
[2] Natl Inst Technol Calicut, Dept Elect & Commun Engn, Kozhikode 673601, Kerala, India
Keywords
BCI; EEG; MI; Wavelet transform; Time-frequency representation; Convolutional neural networks; BRAIN-COMPUTER-INTERFACE; SINGLE-TRIAL EEG; COMMUNICATION;
DOI
10.1007/s12530-025-09696-8
CLC classification number
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Classification of electroencephalogram (EEG) signals according to the user-intended motor imagery (MI) task is crucial for effective brain-computer interfaces (BCIs). Current methods often struggle to attain high classification accuracy. This study aims to improve accuracy by exploiting the spatial and time-frequency characteristics of multichannel EEG data with convolutional neural networks (CNNs). EEG signals acquired from the sensorimotor region were subjected to time-frequency analysis, producing three-dimensional spatially informed time-frequency representations (SITFRs). The CNN was trained and validated on SITFR matrices corresponding to four motor imagery tasks from BCI Competition IV dataset IIa, using five-fold cross-validation. Gaussian noise data augmentation was applied to improve model robustness by increasing variability in the EEG signals while preserving their structural integrity. Four time-frequency approaches, namely the continuous wavelet transform (CWT), wavelet synchrosqueezed transform (WSST), Fourier synchrosqueezed transform (FSST) and synchroextracting transform (SET), were compared. The CNN model attained a mean test accuracy of 98.18% and a kappa score of 0.98 for the CWT-SITFR, outperforming the other TFR methods. The accuracies obtained for FSST, WSST and SET were 97.47%, 94.38% and 91.82%, with kappa scores of 0.97, 0.93 and 0.89, respectively. This approach enables the CNN to learn both time-frequency and spatial features, yielding better performance than existing state-of-the-art techniques.
Pages: 21