Imagined Speech Classification Using EEG and Deep Learning

Cited by: 7
Authors
Abdulghani, Mokhles M. [1 ]
Walters, Wilbur L. [1 ]
Abed, Khalid H. [1 ]
Affiliations
[1] Jackson State Univ, Coll Sci Engn & Technol, Dept Elect & Comp Engn & Comp Sci, Jackson, MS 39217 USA
Source
BIOENGINEERING-BASEL | 2023, Vol. 10, Issue 6
Keywords
inner speech; imagined speech; EEG decoding; brain-computer interface (BCI); LSTM; wavelet scattering transformation (WST);
DOI
10.3390/bioengineering10060649
CLC Number
Q81 [Bioengineering (Biotechnology)]; Q93 [Microbiology];
Discipline Classification Number
071005 ; 0836 ; 090102 ; 100705 ;
Abstract
In this paper, we propose an imagined speech-based brain wave pattern recognition approach using deep learning. Multiple features were extracted concurrently from eight-channel electroencephalography (EEG) signals. To obtain classifiable EEG data with fewer sensors, the EEG sensors were placed on carefully selected spots on the scalp. To reduce the dimensionality and complexity of the EEG dataset and to avoid overfitting during training of the deep learning model, we utilized the wavelet scattering transformation. A low-cost 8-channel EEG headset was used with MATLAB 2023a to acquire the EEG data. A long short-term memory recurrent neural network (LSTM-RNN) was used to decode the acquired EEG signals into four audio commands: up, down, left, and right. The wavelet scattering transformation extracts stable features by passing the EEG data through a series of filtering operations; this filtering was applied to each individual command in the EEG dataset. The proposed imagined speech-based brain wave pattern recognition approach achieved a 92.50% overall classification accuracy, which is promising for designing trustworthy real-time imagined speech-based brain-computer interface (BCI) systems. For a more complete evaluation of classification performance, additional metrics were considered: precision, recall, and F1-score were 92.74%, 92.50%, and 92.62%, respectively.
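The abstract outlines a two-stage pipeline: wavelet scattering features are computed from windowed multichannel EEG, and an LSTM network maps each feature sequence to one of the four commands. The MATLAB sketch below (MATLAB being the environment reported in the abstract) is only a minimal illustration of such a pipeline, not the authors' code; the sampling rate, window length, invariance scale, hidden-unit count, training options, and the synthetic placeholder data are all assumptions introduced for illustration.

```matlab
% Minimal sketch of a wavelet scattering + LSTM pipeline (requires the Wavelet
% Toolbox and Deep Learning Toolbox). All numeric parameters below are
% illustrative assumptions, not values taken from the paper.

fs          = 250;            % assumed EEG sampling rate (Hz)
winLen      = 2 * fs;         % assumed 2-second analysis window (samples)
numChannels = 8;              % 8-channel headset, as described in the abstract
classes     = ["up" "down" "left" "right"];

% Wavelet scattering network for fixed-length, multichannel EEG windows.
sf = waveletScattering('SignalLength', winLen, ...
                       'SamplingFrequency', fs, ...
                       'InvarianceScale', 0.5);   % assumed invariance scale (s)

% Synthetic placeholder trials standing in for recorded EEG epochs.
numTrials = 40;
trials = arrayfun(@(k) randn(winLen, numChannels), 1:numTrials, ...
                  'UniformOutput', false);
labels = categorical(classes(randi(numel(classes), numTrials, 1)), classes);

% Scattering features per trial: featureMatrix returns paths x frames x channels;
% stack the channels so each trial becomes a (paths*channels) x frames sequence.
XTrain = cell(numTrials, 1);
for k = 1:numTrials
    S = featureMatrix(sf, trials{k});
    XTrain{k} = reshape(permute(S, [1 3 2]), [], size(S, 2));
end
YTrain = labels;

% LSTM classifier over the scattering feature sequences (four output classes).
numFeatures = size(XTrain{1}, 1);
layers = [
    sequenceInputLayer(numFeatures)
    lstmLayer(128, 'OutputMode', 'last')          % assumed hidden-unit count
    fullyConnectedLayer(numel(classes))
    softmaxLayer
    classificationLayer];

options = trainingOptions('adam', ...
    'MaxEpochs', 60, ...                          % assumed training settings
    'MiniBatchSize', 16, ...
    'Shuffle', 'every-epoch', ...
    'Verbose', false);

net = trainNetwork(XTrain, YTrain, layers, options);

% Predicted commands for the (placeholder) trials.
predicted = classify(net, XTrain);
accuracy  = mean(predicted == YTrain);
```

In practice, recorded EEG epochs and their labels would replace the synthetic `trials` and `labels`, and the feature sequences would be split into training and test sets before computing accuracy, precision, recall, and F1-score.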
Pages: 15
Related Papers (50 in total)
  • [1] Classification of group speech imagined EEG signals based on attention mechanism and deep learning
    Zhou, Yifan
    Zhang, Lingwei
    Zhou, Zhengdong
    Cai, Zhi
    Yuan, Mengyao
    Yuan, Xiaoxi
    Yang, Zeyi
    Zhejiang Daxue Xuebao (Gongxue Ban)/Journal of Zhejiang University (Engineering Science), 2024, 58 (12): : 2540 - 2546
  • [2] Word-Based Classification of Imagined Speech Using EEG
    Hashim, Noramiza
    Ali, Aziah
    Mohd-Isa, Wan-Noorshahida
    COMPUTATIONAL SCIENCE AND TECHNOLOGY, ICCST 2017, 2018, 488 : 195 - 204
  • [3] Multi-view Learning for EEG Signal Classification of Imagined Speech
    Barajas Montiel, Sandra Eugenia
    Morales, Eduardo F.
    Jair Escalante, Hugo
    PATTERN RECOGNITION, MCPR 2022, 2022, 13264 : 191 - 200
  • [4] Hierarchical Deep Feature Learning for Decoding Imagined Speech from EEG
    Saha, Pramit
    Fels, Sidney
    THIRTY-THIRD AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FIRST INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE / NINTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2019, : 10019 - 10020
  • [5] Inner Speech Classification using EEG Signals: A Deep Learning Approach
    Van den Berg, Bram
    Van Donkelaar, Sander
    Alimardani, Maryam
    PROCEEDINGS OF THE 2021 IEEE INTERNATIONAL CONFERENCE ON HUMAN-MACHINE SYSTEMS (ICHMS), 2021, : 258 - 261
  • [6] Multiclass Classification of Imagined Speech Vowels and Words of Electroencephalography Signals Using Deep Learning
    Mahapatra, Nrushingh Charan
    Bhuyan, Prachet
    ADVANCES IN HUMAN-COMPUTER INTERACTION, 2022, 2022
  • [7] Decoding Imagined Speech From EEG Using Transfer Learning
    Panachakel, Jerrin Thomas
    Ganesan, Ramakrishnan Angarai
    IEEE ACCESS, 2021, 9 : 135371 - 135383
  • [8] Classification of Imagined Speech EEG Signals with DWT and SVM
    Zhang, Lingwei
    Zhou, Zhengdong
    Xu, Yunfei
    Ji, Wentao
    Wang, Jiawen
    Song, Zefeng
    Instrumentation, 2022, 9 (02) : 56 - 63
  • [9] Vowel Classification from Imagined Speech Using Sub-band EEG frequencies and Deep Belief Networks
    Sree, R. Anandha
    Kavitha, A.
    2017 FOURTH INTERNATIONAL CONFERENCE ON SIGNAL PROCESSING, COMMUNICATION AND NETWORKING (ICSCN), 2017,
  • [10] Classification of Imagined and Heard Speech Using Amplitude Spectrum and Relative Phase of EEG
    Sakai, Ryota
    Kai, Atsuhiko
    Nakagawa, Seiichi
    2021 IEEE 3RD GLOBAL CONFERENCE ON LIFE SCIENCES AND TECHNOLOGIES (IEEE LIFETECH 2021), 2021, : 373 - 375