Wavelet filterbank-based EEG rhythm-specific spatial features for covert speech classification

Cited by: 7
Authors
Biswas, Sukanya [1 ]
Sinha, Rohit [1 ]
Affiliations
[1] Indian Institute of Technology Guwahati, Department of Electronics and Electrical Engineering, Guwahati 781039, Assam, India
Keywords
Imagery; Diagonalization; Perception; Algorithm; Removal
DOI
10.1049/sil2.12059
Chinese Library Classification (CLC)
TM [Electrical technology]; TN [Electronic technology, communication technology]
Subject classification codes
0808; 0809
Abstract
This work addresses the derivation of rhythm-specific spatial patterns of electroencephalographic (EEG) signals for the covert speech classification task. The study has been performed on a publicly accessible multi-channel covert speech EEG database consisting of multi-syllabic words. With the motivation of deriving more discriminative features, the data of each channel is decomposed into distinct bands corresponding to the five basic EEG rhythms using a discrete wavelet transform (DWT)-based signal decomposition algorithm. For each band, multi-class common spatial pattern (CSP) features are then computed using joint approximate diagonalisation, and the final feature vector is formed by retaining a few significant CSP components from all five bands. Radial basis function kernel-based support vector machines are used for covert speech classification. With 5-fold cross-validation, the proposed DWT-based bandwise-CSP features yield an average classification accuracy of 94%, a relative improvement of about 24% over the existing (non-decomposed) CSP feature. For generalisation, the proposed approach has also been evaluated on another covert speech database comprising more classes and subjects. The study highlights the discovery of more discriminative patterns through rhythm-specific processing in the context of covert speech classification. The proposed approach has the potential to be useful in other brain-computer interface paradigms that employ EEG signals.
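
A minimal Python sketch of the bandwise DWT-plus-CSP pipeline described in the abstract is given below. It is not the authors' implementation: the wavelet (db4), decomposition depth, level-to-rhythm mapping, assumed sampling rate, and the two-class CSP (standing in for the paper's multi-class CSP obtained via joint approximate diagonalisation) are illustrative assumptions.

import numpy as np
import pywt
from scipy.linalg import eigh
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

RHYTHMS = ["delta", "theta", "alpha", "beta", "gamma"]  # five basic EEG rhythms

def band_signals(trial, wavelet="db4", level=6):
    """Split one trial (channels x samples) into five rhythm-specific signals
    by DWT decomposition and selective reconstruction (assumes ~250 Hz EEG)."""
    bands = []
    for b in range(len(RHYTHMS)):
        rec = []
        for ch in trial:
            coeffs = pywt.wavedec(ch, wavelet, level=level)
            kept = [np.zeros_like(c) for c in coeffs]
            kept[b + 1] = coeffs[b + 1]   # crude one-level-per-rhythm mapping (assumption)
            rec.append(pywt.waverec(kept, wavelet)[:len(ch)])
        bands.append(np.asarray(rec))
    return bands                          # five arrays, each channels x samples

def csp_filters(trials_a, trials_b, n_comp=4):
    """Two-class CSP spatial filters from the generalised eigendecomposition
    of the average class covariance matrices."""
    Ca = np.mean([np.cov(t) for t in trials_a], axis=0)
    Cb = np.mean([np.cov(t) for t in trials_b], axis=0)
    _, vecs = eigh(Ca, Ca + Cb)           # eigenvalues returned in ascending order
    picks = np.r_[np.arange(n_comp // 2), np.arange(-n_comp // 2, 0)]
    return vecs[:, picks].T               # retain extreme components: n_comp x channels

def bandwise_csp_features(trial, filters_per_band):
    """Log-variance of the retained CSP components, concatenated over the five bands."""
    feats = []
    for band, W in zip(band_signals(trial), filters_per_band):
        var = np.var(W @ band, axis=1)
        feats.extend(np.log(var / var.sum()))
    return np.asarray(feats)

Usage sketch: with trials of shape n_trials x channels x samples and one label per trial, stack bandwise_csp_features over all trials into a feature matrix X and score an RBF-SVM, e.g. cross_val_score(SVC(kernel="rbf"), X, labels, cv=5).mean(). In a proper evaluation the CSP filters must be learnt on the training folds only.
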
Pages: 92-105
Number of pages: 14