Multisensory integration of speech sounds with letters vs. visual speech: only visual speech induces the mismatch negativity

Cited by: 9
Authors
Stekelenburg, Jeroen J. [1 ]
Keetels, Mirjam [1 ]
Vroomen, Jean [1 ]
Affiliation
[1] Tilburg Univ, Dept Cognit Neuropsychol, Warandelaan 2,POB 90153, NL-5000 LE Tilburg, Netherlands
Keywords
event-related potentials; McGurk-MMN; text-sound integration; visual speech-sound integration; AUDIOVISUAL INTEGRATION; MMN; RECALIBRATION;
DOI
10.1111/ejn.13908
Chinese Library Classification: Q189 [Neuroscience]
Discipline code: 071006
Abstract
Numerous studies have demonstrated that the vision of lip movements can alter the perception of auditory speech syllables (McGurk effect). While there is ample evidence for integration of text and auditory speech, there are only a few studies on the orthographic equivalent of the McGurk effect. Here, we examined whether written text, like visual speech, can induce an illusory change in the perception of speech sounds on both the behavioural and neural levels. In a sound categorization task, we found that both text and visual speech changed the identity of speech sounds from an /aba/-/ada/ continuum, but the size of this audiovisual effect was considerably smaller for text than visual speech. To examine at which level in the information processing hierarchy these multisensory interactions occur, we recorded electroencephalography in an audiovisual mismatch negativity (MMN, a component of the event-related potential reflecting preattentive auditory change detection) paradigm in which deviant text or visual speech was used to induce an illusory change in a sequence of ambiguous sounds halfway between /aba/ and /ada/. We found that only deviant visual speech induced an MMN, but not deviant text, which induced a late P3-like positive potential. These results demonstrate that text has much weaker effects on sound processing than visual speech does, possibly because text has different biological roots than visual speech.
Pages: 1135-1145 (11 pages)