Top-Down Predictions of Familiarity and Congruency in Audio-Visual Speech Perception at Neural Level

Cited by: 2
Authors
Kolozsvári, Orsolya B. [1,2]
Xu, Weiyong [1,2]
Leppänen, Paavo H. T. [1,2]
Hämäläinen, Jarmo A. [1,2]
Affiliations
[1] Univ Jyväskylä, Dept Psychol, Jyväskylä, Finland
[2] Univ Jyväskylä, Jyväskylä Ctr Interdisciplinary Brain Res CIBR, Jyväskylä, Finland
Funding
Academy of Finland
Keywords
speech perception; magnetoencephalography; audio-visual stimuli; audio-visual integration; familiarity; MULTISENSORY INTERACTIONS; VISUAL SPEECH; INTEGRATION; BRAIN; ACTIVATION; RESPONSES; FMRI; MEG; LOCALIZATION; INFORMATION;
DOI
10.3389/fnhum.2019.00243
Chinese Library Classification
Q189 [Neuroscience]
Discipline Code
071006
Abstract
During speech perception, listeners rely on multimodal input and make use of both auditory and visual information. When speech is presented, for example as syllables, differences in brain responses to distinct stimuli are not caused merely by the acoustic or visual features of the stimuli. The congruency of the auditory and visual information, and the familiarity of a syllable, that is, whether it appears in the listener's native language or not, also modulate brain responses. We investigated how the congruency and familiarity of the presented stimuli affect brain responses to audio-visual (AV) speech in 12 adult Finnish native speakers and 12 adult Chinese native speakers. During a magnetoencephalography (MEG) measurement, participants watched videos of a Chinese speaker pronouncing syllables (/pa/, /pha/, /ta/, /tha/, /fa/); only /pa/ and /ta/ are part of Finnish phonology, whereas all the stimuli are part of Chinese phonology. The stimuli were presented in audio-visual (congruent or incongruent), audio-only, or visual-only conditions. Brain responses were examined in five time windows: 75-125, 150-200, 200-300, 300-400, and 400-600 ms. We found significant differences for the congruency comparison in the fourth time window (300-400 ms) in both sensor- and source-level analyses, with larger responses for incongruent than for congruent stimuli. No significant differences were found for the familiarity comparisons. The results are in line with earlier studies reporting modulation of brain responses by audio-visual congruency around 250-500 ms. This suggests a much stronger process for the general detection of a mismatch between predictions based on lip movements and the auditory signal than for the top-down modulation of brain responses based on phonological information.
Pages: 11
Cited References
48 total
[41]   Top-down task effects overrule automatic multisensory responses to letter-sound pairs in auditory association cortex [J].
van Atteveldt, Nienke M. ;
Formisano, Ella ;
Goebel, Rainer ;
Blomert, Leo .
NEUROIMAGE, 2007, 36 (04) :1345-1360
[42]   Visual speech speeds up the neural processing of auditory speech [J].
van Wassenhove, V ;
Grant, KW ;
Poeppel, D .
PROCEEDINGS OF THE NATIONAL ACADEMY OF SCIENCES OF THE UNITED STATES OF AMERICA, 2005, 102 (04) :1181-1186
[43]   Cortical processing of change detection: Dissociation between natural vowels and two-frequency complex tones [J].
Vihla, M ;
Lounasmaa, OV ;
Salmelin, R .
PROCEEDINGS OF THE NATIONAL ACADEMY OF SCIENCES OF THE UNITED STATES OF AMERICA, 2000, 97 (19) :10590-10594
[44]   Visual Anticipatory Information Modulates Multisensory Interactions of Artificial Audiovisual Stimuli [J].
Vroomen, Jean ;
Stekelenburg, Jeroen J. .
JOURNAL OF COGNITIVE NEUROSCIENCE, 2010, 22 (07) :1583-1596
[45]   Brain responses reveal the learning of foreign language phonemes [J].
Winkler, I ;
Kujala, T ;
Tiitinen, H ;
Sivonen, P ;
Alku, P ;
Lehtokoski, A ;
Czigler, I ;
Csépe, V ;
Ilmoniemi, RJ ;
Näätänen, R .
PSYCHOPHYSIOLOGY, 1999, 36 (05) :638-642
[46]   Audiovisual Processing of Chinese Characters Elicits Suppression and Congruency Effects in MEG [J].
Xu, Weiyong ;
Kolozsvári, Orsolya Beatrix ;
Oostenveld, Robert ;
Leppänen, Paavo Herman Tapio ;
Hämäläinen, Jarmo Arvid .
FRONTIERS IN HUMAN NEUROSCIENCE, 2019, 13
[47]   Effects of language experience: Neural commitment to language-specific auditory patterns [J].
Zhang, Y ;
Kuhl, PK ;
Imada, T ;
Kotani, M ;
Tohkura, Y .
NEUROIMAGE, 2005, 26 (03) :703-720
[48]   Neural signatures of phonetic learning in adulthood: A magnetoencephalography study [J].
Zhang, Yang ;
Kuhl, Patricia K. ;
Imada, Toshiaki ;
Iverson, Paul ;
Pruitt, John ;
Stevens, Erica B. ;
Kawakatsu, Masaki ;
Tohkura, Yoh'ichi ;
Nemoto, Iku .
NEUROIMAGE, 2009, 46 (01) :226-240