Motor imagery task classification using spatial-time-frequency features of EEG signals: a deep learning approach for improved performance

Cited by: 0
Authors
Jishad, T. K. Muhamed [1 ]
Sudeep, P. V. [2 ]
Sanjay, M. [1 ]
Affiliations
[1] Natl Inst Technol Calicut, Dept Elect Engn, Kozhikode 673601, Kerala, India
[2] Natl Inst Technol Calicut, Dept Elect & Commun Engn, Kozhikode 673601, Kerala, India
Keywords
BCI; EEG; MI; Wavelet transform; Time-frequency representation; Convolutional neural networks; Brain-computer interface; Single-trial EEG; Communication
DOI
10.1007/s12530-025-09696-8
Chinese Library Classification (CLC)
TP18 [Theory of artificial intelligence]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Classification of electroencephalogram (EEG) signals according to the user-intended motor imagery (MI) task is crucial for effective brain-computer interfaces (BCIs). Current methods often struggle to attain high classification accuracy. This study aims to improve accuracy by exploiting the spatial and time-frequency characteristics of multichannel EEG data with convolutional neural networks (CNNs). EEG signals acquired from the sensorimotor region were subjected to time-frequency analysis, creating three-dimensional spatially informed time-frequency representations (SITFR). The CNN was trained and validated on SITFR matrices corresponding to four motor imagery tasks from the BCI Competition IV dataset IIa, using five-fold cross-validation. Gaussian noise data augmentation was applied to improve model robustness by increasing variability in the EEG signals while preserving their structural integrity. Four time-frequency approaches were used in this experiment: the continuous wavelet transform (CWT), wavelet synchrosqueezed transform (WSST), Fourier synchrosqueezed transform (FSST) and synchroextracting transform (SET). The CNN model attained a mean test accuracy of 98.18% and a kappa score of 0.98 for the CWT-SITFR, outperforming the other time-frequency representation (TFR) methods. The accuracies obtained for FSST, WSST and SET were 97.47%, 94.38% and 91.82%, with kappa scores of 0.97, 0.93 and 0.89, respectively. This approach enables the CNN to learn both time-frequency and spatial features, resulting in better performance than existing state-of-the-art techniques.
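To make the pipeline described in the abstract concrete, the following Python sketch illustrates the general idea of a CWT-based SITFR: each channel of a multichannel EEG trial is converted to a magnitude scalogram while the channel axis is retained, Gaussian noise is added to the raw signal for augmentation, and the resulting (channel x frequency x time) array is fed to a small CNN. The window length, wavelet scales, noise level and network layout (cwt_sitfr, augment_gaussian, SITFRNet) are illustrative assumptions, not the authors' exact configuration; PyWavelets and PyTorch are used for convenience.

# Minimal sketch (assumed settings, not the paper's architecture) of a
# CWT-based spatially informed time-frequency representation (SITFR)
# followed by a small CNN classifier for the four MI classes.

import numpy as np
import pywt
import torch
import torch.nn as nn

FS = 250             # BCI Competition IV-2a sampling rate (Hz)
N_CHANNELS = 22      # EEG channels over the sensorimotor area
N_SAMPLES = 2 * FS   # assumed 2-second motor-imagery window

def cwt_sitfr(trial, scales=np.arange(1, 65), wavelet="morl"):
    """trial: (channels, samples) -> (channels, n_scales, samples) scalogram stack."""
    reps = []
    for ch in trial:
        coeffs, _ = pywt.cwt(ch, scales, wavelet, sampling_period=1.0 / FS)
        reps.append(np.abs(coeffs))          # magnitude scalogram per channel
    return np.stack(reps, axis=0).astype(np.float32)

def augment_gaussian(trial, noise_std=0.05):
    """Gaussian-noise augmentation: jitter the raw EEG while keeping its structure."""
    return trial + noise_std * trial.std() * np.random.randn(*trial.shape)

class SITFRNet(nn.Module):
    """Small CNN over the (channel, frequency, time) representation; 4 MI classes."""
    def __init__(self, n_channels=N_CHANNELS, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(n_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):                    # x: (batch, channels, freqs, time)
        return self.classifier(self.features(x).flatten(1))

# Example: one synthetic trial through the sketched pipeline
trial = np.random.randn(N_CHANNELS, N_SAMPLES)
sitfr = cwt_sitfr(augment_gaussian(trial))   # shape (22, 64, 500)
logits = SITFRNet()(torch.from_numpy(sitfr).unsqueeze(0))
print(logits.shape)                          # torch.Size([1, 4])

Swapping cwt_sitfr for an FSST, WSST or SET front end would only change how the scalogram stack is computed; the channel-stacked 3-D input and the CNN stay the same, which is the design choice the abstract attributes the spatial-plus-time-frequency learning to.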
Pages: 21