Time Course of Early Audiovisual Interactions during Speech and Nonspeech Central Auditory Processing: A Magnetoencephalography Study

Cited by: 10
Authors
Hertrich, Ingo [1 ]
Mathiak, Klaus [2 ]
Lutzenberger, Werner
Ackermann, Hermann
Affiliations
[1] Univ Tubingen, Dept Gen Neurol, D-72076 Tubingen, Germany
[2] Rhein Westfal TH Aachen, Aachen, Germany
Keywords
EVOKED MAGNETIC-FIELDS; VISUAL SPEECH; HUMAN BRAIN; CORTEX; INTEGRATION; PERCEPTION; ATTENTION; LANGUAGE; HUMANS; RECOGNITION;
DOI
10.1162/jocn.2008.21019
Chinese Library Classification
Q189 [Neuroscience];
Discipline Code
071006;
Abstract
Cross-modal fusion phenomena suggest specific interactions of auditory and visual sensory information both within the speech and nonspeech domains. Using whole-head magnetoencephalography, this study recorded M50 and M100 fields evoked by ambiguous acoustic stimuli that were visually disambiguated to perceived /ta/ or /pa/ syllables. As in natural speech, visual motion onset preceded the acoustic signal by 150 msec. Control conditions included visual and acoustic nonspeech signals as well as visual-only and acoustic-only stimuli. (a) Both speech and nonspeech motion yielded a consistent attenuation of the auditory M50 field, suggesting a visually induced "preparatory baseline shift" at the level of the auditory cortex. (b) Within the temporal domain of the auditory M100 field, visual speech and nonspeech motion gave rise to different response patterns (nonspeech: M100 attenuation; visual /pa/: left-hemisphere M100 enhancement; /ta/: no effect). (c) These interactions could be further decomposed using a six-dipole model. One of these three pairs of dipoles (V270) was fitted to motion-induced activity at a latency of 270 msec after motion onset, that is, the time domain of the auditory M100 field, and could be attributed to the posterior insula. This dipole source responded to nonspeech motion and visual /pa/, but was found suppressed in the case of visual /ta/. Such a nonlinear interaction might reflect the operation of a binary distinction between the marked phonological feature "labial" and its underspecified competitor "coronal." Thus, visual processing seems to be shaped by linguistic data structures even prior to its fusion with the auditory information channel.
Pages: 259 - 274
Page count: 16
Related Papers
17 records in total
  • [1] Cross-modal Interactions during Perception of Audiovisual Speech and Nonspeech Signals: An fMRI Study
    Hertrich, Ingo
    Dietrich, Susanne
    Ackermann, Hermann
    JOURNAL OF COGNITIVE NEUROSCIENCE, 2011, 23 (01) : 221 - 237
  • [2] Time Course of Auditory Processing, Visual Processing, Language and Speech Processing
    Muluk, Nuray Bayar
    Yalcinkaya, Fulya
    JOURNAL OF INTERNATIONAL ADVANCED OTOLOGY, 2010, 6 (02) : 258 - 262
  • [3] Perceptual Doping: A Hypothesis on How Early Audiovisual Speech Stimulation Enhances Subsequent Auditory Speech Processing
    Moradi, Shahram
    Ronnberg, Jerker
    BRAIN SCIENCES, 2023, 13 (04)
  • [4] An fMRI Study of Audiovisual Speech Perception Reveals Multisensory Interactions in Auditory Cortex
    Okada, Kayoko
    Venezia, Jonathan H.
    Matchin, William
    Saberi, Kourosh
    Hickok, Gregory
    PLOS ONE, 2013, 8 (06)
  • [5] Changes in visually and auditory attended audiovisual speech processing in cochlear implant users: A longitudinal ERP study
    Weglage, Anna
    Layer, Natalie
    Meister, Hartmut
    Mueller, Verena
    Lang-Roth, Ruth
    Walger, Martin
    Sandmann, Pascale
    HEARING RESEARCH, 2024, 447
  • [6] Impaired Auditory Information Processing During Acute Migraine: A Magnetoencephalography Study
    Korostenskaja, Milena
    Pardos, Maria
    Kujala, Teija
    Rose, Douglas F.
    Brown, David
    Horn, Paul
    Wang, Yingying
    Fujiwara, Hisako
    Xiang, Jing
    Kabbouche, Marielle A.
    Powers, Scott W.
    Hershey, Andrew D.
    INTERNATIONAL JOURNAL OF NEUROSCIENCE, 2011, 121 (07) : 355 - 365
  • [7] Sequential audiovisual interactions during speech perception: A whole-head MEG study
    Hertrich, Ingo
    Mathiak, Klaus
    Lutzenberger, Werner
    Menning, Hans
    Ackermann, Hermann
    NEUROPSYCHOLOGIA, 2007, 45 (06) : 1342 - 1354
  • [8] Age-related differences in auditory evoked potentials as a function of task modulation during speech-nonspeech processing
    Rufener, Katharina Simone
    Liem, Franziskus
    Meyer, Martin
    BRAIN AND BEHAVIOR, 2014, 4 (01) : 21 - 28
  • [9] How are visemes and graphemes integrated with speech sounds during spoken word recognition? ERP evidence for supra-additive responses during audiovisual compared to auditory speech processing
    Pattamadilok, Chotiga
    Sato, Marc
    BRAIN AND LANGUAGE, 2022, 225
  • [10] Early- and Late-Stage Auditory Processing of Speech Versus Non-Speech Sounds in Children With Autism Spectrum Disorder: An ERP and Oscillatory Activity Study
    Edgar, Elizabeth V.
    Mcguire, Kjersti
    Pelphrey, Kevin A.
    Ventola, Pamela
    van Noordt, Stefon
    Crowley, Michael J.
    DEVELOPMENTAL PSYCHOBIOLOGY, 2024, 66 (08)