Decoding speech perception from non-invasive brain recordings

Cited by: 61
Authors
Defossez, Alexandre [1]
Caucheteux, Charlotte [1,2]
Rapin, Jeremy [1]
Kabeli, Ori [3]
King, Jean-Remi [1,4]
Affiliations
[1] Meta AI, Paris, France
[2] INRIA Saclay, Saclay, France
[3] Meta AI, Tel Aviv, Israel
[4] PSL Univ, Ecole Normale Super, Dept Etud Cognit, LSP, Paris, France
Keywords
Convolutional neural networks; Magnetoencephalography; Communication; Awareness
DOI
10.1038/s42256-023-00714-5
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Decoding speech from brain activity is a long-awaited goal in both healthcare and neuroscience. Invasive devices have recently led to major milestones in this regard: deep-learning algorithms trained on intracranial recordings can now start to decode elementary linguistic features such as letters, words and audio-spectrograms. However, extending this approach to natural speech and non-invasive brain recordings remains a major challenge. Here we introduce a model trained with contrastive learning to decode self-supervised representations of perceived speech from the non-invasive recordings of a large cohort of healthy individuals. To evaluate this approach, we curate and integrate four public datasets, encompassing 175 volunteers recorded with magneto-encephalography or electro-encephalography while they listened to short stories and isolated sentences. The results show that our model can identify, from 3 seconds of magneto-encephalography signals, the corresponding speech segment with up to 41% accuracy out of more than 1,000 distinct possibilities on average across participants, and with up to 80% in the best participants, a performance that allows the decoding of words and phrases absent from the training set. The comparison of our model with a variety of baselines highlights the importance of a contrastive objective, pretrained representations of speech and a common convolutional architecture simultaneously trained across multiple participants. Finally, the analysis of the decoder's predictions suggests that they primarily depend on lexical and contextual semantic representations. Overall, this effective decoding of perceived speech from non-invasive recordings delineates a promising path to decode language from brain activity, without putting patients at risk of brain surgery.

Editor's summary: Deep learning can help develop non-invasive technology for decoding speech from brain activity, which could improve the lives of patients with brain injuries. Defossez et al. report a contrastive-learning approach to decode speech listening from human participants, using public databases of recordings based on non-invasive magnetic and electrical measurements.
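The retrieval setup described in the abstract, matching a 3-second brain segment to the correct speech segment among more than 1,000 candidates via a contrastive objective, can be sketched as a CLIP-style symmetric cross-entropy over a similarity matrix. The NumPy function below is an illustrative sketch under assumed embedding shapes and an assumed temperature value, not the authors' exact implementation:

```python
import numpy as np

def contrastive_loss(brain_emb, speech_emb, temperature=0.1):
    """InfoNCE-style loss: row i of brain_emb should match row i of
    speech_emb among all rows in the batch (shapes: (B, D) each)."""
    # Normalize embeddings to unit length so the dot product is cosine similarity.
    b = brain_emb / np.linalg.norm(brain_emb, axis=1, keepdims=True)
    s = speech_emb / np.linalg.norm(speech_emb, axis=1, keepdims=True)
    logits = b @ s.T / temperature  # (B, B) similarity matrix
    # Log-softmax over candidates; matched pairs sit on the diagonal.
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))
```

At test time the same similarity matrix yields the decoding accuracy reported in the abstract: for each brain segment, the predicted speech segment is the candidate with the highest cosine similarity.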
Pages: 1097+
Page count: 16