Decoding Imagined and Spoken Phrases From Non-invasive Neural (MEG) Signals

Cited: 69
Authors
Dash, Debadatta [1 ,2 ]
Ferrari, Paul [3 ,4 ]
Wang, Jun [2 ,5 ]
Affiliations
[1] Univ Texas Austin, Dept Elect & Comp Engn, Austin, TX 78712 USA
[2] Univ Texas Austin, Med Sch, Dept Neurol, Austin, TX 78712 USA
[3] Childrens Med Ctr, MEG Lab, Austin, TX USA
[4] Univ Texas Austin, Dept Psychol, Austin, TX 78712 USA
[5] Univ Texas Austin, Dept Commun Sci & Disorders, Austin, TX 78712 USA
Funding
US National Institutes of Health
Keywords
MEG; speech; brain-computer interface; wavelet; convolutional neural network; neural technology; SPEECH; BRAIN; FMRI; MAGNETOENCEPHALOGRAPHY; CLASSIFICATION; IDENTIFICATION; NETWORKS; MODELS;
DOI
10.3389/fnins.2020.00290
Chinese Library Classification
Q189 [Neuroscience]
Subject Classification Code
071006
Abstract
Speech production is a hierarchical mechanism involving the synchronization of the brain and the oral articulators, through which intended linguistic concepts are transformed into meaningful sounds. Individuals with locked-in syndrome (fully paralyzed but aware) completely lose motor ability, including articulation and even eye movement. For these patients, the neural pathway may be the only option for resuming some level of communication. Current brain-computer interfaces (BCIs) build communication from patients' visual and attentional correlates, resulting in a slow communication rate (a few words per minute). Directly decoding imagined speech from neural signals (and then driving a speech synthesizer) has the potential for a higher communication rate. In this study, we investigated the decoding of five imagined and spoken phrases from single-trial, non-invasive magnetoencephalography (MEG) signals collected from eight adult subjects. Two machine learning algorithms were used: an artificial neural network (ANN) with statistical features as the baseline approach, and convolutional neural networks (CNNs) applied to spatial, spectral, and temporal features extracted from the MEG signals. Experimental results indicated that imagined and spoken phrases can be decoded directly from neuromagnetic signals. CNNs were highly effective, with average decoding accuracies of up to 93% for imagined and 96% for spoken phrases.
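The baseline pipeline described in the abstract (statistical features computed per trial, then fed to an ANN classifier) can be sketched as follows. This is a minimal illustration only: the specific feature set, channel count, and sampling rate here are assumptions, not the paper's exact configuration.

```python
import numpy as np

def statistical_features(trial):
    """Compute simple per-channel statistical features from one MEG trial.

    trial: array of shape (n_channels, n_samples).
    Returns a flat vector of mean, std, min, max, and RMS per channel.
    (Illustrative feature set; the paper's exact features may differ.)
    """
    feats = [
        trial.mean(axis=1),                   # mean amplitude per channel
        trial.std(axis=1),                    # variability per channel
        trial.min(axis=1),                    # minimum amplitude
        trial.max(axis=1),                    # maximum amplitude
        np.sqrt((trial ** 2).mean(axis=1)),   # root-mean-square amplitude
    ]
    return np.concatenate(feats)

# Example: one simulated single trial with 204 gradiometer channels
# and 1 s of data at an assumed 1 kHz sampling rate.
rng = np.random.default_rng(0)
trial = rng.standard_normal((204, 1000))
x = statistical_features(trial)
print(x.shape)  # (1020,): 5 features x 204 channels
```

The resulting fixed-length vector is what a small feed-forward ANN would consume; the CNN approach instead operates on the spatial-spectral-temporal structure of the signal directly rather than on such summary statistics.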
Pages: 15