Decoding Silent Reading EEG Signals Using Adaptive Feature Graph Convolutional Network

Cited by: 4
Authors
Li, Chengfang [1 ]
Fang, Gaoyun [1 ]
Liu, Yang [1 ,2 ]
Liu, Jing [1 ]
Song, Liang [1 ]
Affiliations
[1] Fudan Univ, Acad Engn & Technol, Shanghai 200433, Peoples R China
[2] Univ Toronto, Dept Comp Sci, Toronto, ON M5S 1A1, Canada
Keywords
Electroencephalography (EEG); decoding; task analysis; training; adaptive systems; symmetric matrices; convolutional neural networks; silent reading; graph convolutional network; adaptive graph; language impairment; performance
DOI
10.1109/LSP.2023.3337727
CLC classification
TM [Electrical engineering]; TN [Electronics and communication technology]
Discipline codes
0808; 0809
Abstract
Decoding silent reading electroencephalography (EEG) signals is challenging because of their low signal-to-noise ratio. In addition, EEG signals have an inherently non-Euclidean structure, so a two-dimensional matrix that represents only the temporal variation of each channel's sampling points cannot richly capture the spatial connections between channels. Furthermore, because of individual differences in EEG signals, a fixed representation cannot adequately capture the temporal and spatial associations between channels in real time. In this letter, we represent each EEG signal by its feature matrix together with an adaptive graph structure, and we use them as inputs to a novel Adaptive Feature Graph Convolutional Network (AFGCN) that decodes silent reading EEG signals. We classify silent reading EEG signals under different tasks from 16 subjects in two publicly available datasets. The experimental results demonstrate that the proposed method achieves higher decoding accuracy than state-of-the-art EEG classification networks on both datasets, with a highest four-class accuracy of 83.33%. The study could promote the application and development of brain-computer interface (BCI) technology for silent reading EEG signal decoding, and it can provide an efficient and convenient communication method for patients with language impairment.
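As a rough illustration of the idea the abstract describes (the letter itself provides no code here), the sketch below builds a symmetric, self-looped adjacency matrix over EEG channels, applies the standard symmetric normalization D^{-1/2} A D^{-1/2}, and runs one graph-convolution step on a per-channel feature matrix followed by a softmax readout. All dimensions, the random initialization, and the single-layer readout are illustrative assumptions, not the authors' AFGCN; in the actual model the adjacency would be learned adaptively during training.

```python
import numpy as np

rng = np.random.default_rng(0)

n_channels = 8   # EEG electrodes (graph nodes); illustrative, not from the paper
n_features = 16  # per-channel feature dimension (assumption)
n_classes = 4    # four-class silent reading task, as in the abstract

# Hypothetical adaptive adjacency: a dense matrix made symmetric and
# non-negative, with self-loops added (in AFGCN this would be learned).
A_raw = rng.standard_normal((n_channels, n_channels))
A = np.abs(A_raw + A_raw.T) / 2 + np.eye(n_channels)

# Symmetric normalization: D^{-1/2} A D^{-1/2}
d_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(axis=1)))
A_hat = d_inv_sqrt @ A @ d_inv_sqrt

# One graph-convolution layer plus a linear readout (illustrative weights)
W = rng.standard_normal((n_features, 32)) * 0.1
W_out = rng.standard_normal((n_channels * 32, n_classes)) * 0.1

# One trial's feature matrix: rows are channels, columns are features
X = rng.standard_normal((n_channels, n_features))

H = np.maximum(A_hat @ X @ W, 0.0)   # ReLU(A_hat X W)
logits = H.reshape(-1) @ W_out
probs = np.exp(logits - logits.max())
probs /= probs.sum()                  # softmax over the four classes
```

The normalization step keeps the propagated features on a comparable scale regardless of how densely a channel is connected, which is why most GCN variants (including adaptive-graph ones) apply it before each convolution.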
Pages: 1-5 (5 pages)