Representation of Articulatory Features in EEG During Speech Production Tasks

Cited by: 0
Authors
Sun, Sinan [1 ,3 ,4 ]
Zhang, Longxiang [1 ,2 ]
Wang, Bo [1 ,2 ]
Wu, Xihong [1 ,2 ]
Chen, Jing [1 ,2 ,3 ,4 ]
Affiliations
[1] Peking Univ, Natl Key Lab Gen Artificial Intelligence, Beijing, Peoples R China
[2] Peking Univ, Sch Intelligence Sci & Technol, Speech & Hearing Res Ctr, Beijing, Peoples R China
[3] Peking Univ, Ctr BioMed X Res, Acad Adv Interdisciplinary Studies, Beijing, Peoples R China
[4] Peking Univ, Coll Future Technol, Natl Biomed Imaging Ctr, Beijing, Peoples R China
Source
2024 IEEE 14TH INTERNATIONAL SYMPOSIUM ON CHINESE SPOKEN LANGUAGE PROCESSING, ISCSLP 2024 | 2024
Keywords
speech production; articulatory feature; EEG;
DOI
10.1109/ISCSLP63861.2024.10800095
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104; 0812; 0835; 1405;
Abstract
Investigating how speech features are represented in EEG signals is crucial for advancing non-invasive speech decoding. In this study, we compared the EEG representation of two speech features commonly used in speech decoding: articulatory and acoustic features. To this end, we collected EEG data from 8 Mandarin-speaking participants while they performed both overt and imagined speech tasks, and constructed linear encoding and decoding models to relate the EEG signals to the speech features. The decoding models showed that articulatory features are better represented in EEG than traditional acoustic features. In addition, scalp topographies of entrainment strength obtained from the encoding models revealed a strong representation of articulatory features at electrodes over the parietal motor area. These findings will contribute to the further application of articulatory features in EEG-based neural speech decoding.
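The linear decoding approach described in the abstract can be illustrated with a minimal sketch. This is not the authors' code: the synthetic data, lag count, and ridge penalty `alpha` are all illustrative assumptions. It shows the general recipe of predicting a speech-feature trajectory (e.g. one articulatory dimension) from time-lagged multichannel EEG via ridge regression, with held-out Pearson correlation as the decoding score.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_channels, n_lags, alpha = 2000, 8, 5, 1.0  # illustrative sizes

# Synthetic stand-ins: "EEG" (samples x channels) and a feature it partly drives.
eeg = rng.standard_normal((n_samples, n_channels))

def lagged(x, n_lags):
    """Stack time-lagged copies of each channel: (T, C) -> (T, C * n_lags)."""
    return np.concatenate([np.roll(x, lag, axis=0) for lag in range(n_lags)], axis=1)

X = lagged(eeg, n_lags)
true_w = rng.standard_normal(n_channels * n_lags)
feature = X @ true_w + 0.5 * rng.standard_normal(n_samples)  # noisy linear target

# Closed-form ridge fit on the first half: w = (X'X + alpha*I)^{-1} X'y.
half = n_samples // 2
Xtr, Xte = X[:half], X[half:]
ytr, yte = feature[:half], feature[half:]
w = np.linalg.solve(Xtr.T @ Xtr + alpha * np.eye(X.shape[1]), Xtr.T @ ytr)

# Decoding accuracy as Pearson correlation on held-out data.
pred = Xte @ w
r = np.corrcoef(pred, yte)[0, 1]
print(f"held-out correlation: {r:.3f}")
```

An encoding model is the same regression run in the opposite direction (speech features predicting each EEG channel), whose per-electrode fit strengths yield the scalp topographies mentioned above.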
Pages: 219-223
Number of pages: 5