Combined predictive effects of sentential and visual constraints in early audiovisual speech processing

Cited by: 0
Authors
Heidi Solberg Økland
Ana Todorović
Claudia S. Lüttke
James M. McQueen
Floris P. de Lange
Affiliations
[1] Medical Research Council Cognition and Brain Sciences Unit
[2] Oxford Centre for Human Brain Activity, University of Oxford
[3] University of Cambridge
[4] Donders Institute for Brain, Cognition and Behaviour, Radboud University
[5] Max Planck Institute for Psycholinguistics
Source
Scientific Reports, Vol. 9
DOI: not available
Abstract
In language comprehension, a variety of contextual cues act in unison to render upcoming words more or less predictable. As a sentence unfolds, we use prior context (sentential constraints) to predict what the next words might be. Additionally, in a conversation, we can predict upcoming sounds through observing the mouth movements of a speaker (visual constraints). In electrophysiological studies, effects of visual constraints have typically been observed early in language processing, while effects of sentential constraints have typically been observed later. We hypothesized that the visual and the sentential constraints might feed into the same predictive process such that effects of sentential constraints might also be detectable early in language processing through modulations of the early effects of visual salience. We presented participants with audiovisual speech while recording their brain activity with magnetoencephalography. Participants saw videos of a person saying sentences where the last word was either sententially constrained or not, and began with a salient or non-salient mouth movement. We found that sentential constraints indeed exerted an early (N1) influence on language processing. Sentential modulations of the N1 visual predictability effect were visible in brain areas associated with semantic processing, and were differently expressed in the two hemispheres. In the left hemisphere, visual and sentential constraints jointly suppressed the auditory evoked field, while the right hemisphere was sensitive to visual constraints only in the absence of strong sentential constraints. These results suggest that sentential and visual constraints can jointly influence even very early stages of audiovisual speech comprehension.