Cortical operational synchrony during audio-visual speech integration

Cited by: 47
Authors
Fingelkurts, AA
Fingelkurts, AA
Krause, CM
Möttönen, R
Sams, M
Affiliations
[1] Moscow MV Lomonosov State Univ, Human Physiol Dept, Human Brain Res Grp, Moscow 119899, Russia
[2] BM Sci Brain & Mind Technol Res Ctr, FI-02601 Espoo, Finland
[3] Univ Helsinki, Cognit Sci Dept Psychol, FIN-00014 Helsinki, Finland
[4] Aalto Univ, Lab Computat Engn, Helsinki 02015, Finland
Keywords
multisensory integration; crossmodal; audio-visual; synchronization; operations; large-scale networks; MEG
DOI
10.1016/S0093-934X(03)00059-2
Chinese Library Classification
R36 [Pathology]; R76 [Otorhinolaryngology]
Subject classification codes
100104; 100213
Abstract
Information from different sensory modalities is processed in different cortical regions. However, our daily perception is based on the overall impression that results from integrating information across multiple sensory modalities. It is not yet known how the human brain integrates information from different modalities into a unified percept. Using the robust McGurk effect, the present study shows that audio-visual synthesis takes place within distributed and dynamic cortical networks with emergent properties. Various cortical sites within these networks interact with each other by means of so-called operational synchrony (Kaplan, Fingelkurts, Fingelkurts, & Darkhovsky, 1997). The temporal synchronization, across cortical sites, of the operations that process unimodal stimuli underscores the importance of the temporal features of auditory and visual stimuli for audio-visual speech integration. (C) 2003 Elsevier Science (USA). All rights reserved.
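The operational-synchrony measure cited in the abstract (Kaplan, Fingelkurts, Fingelkurts, & Darkhovsky, 1997) rests on two steps: each MEG/EEG channel is segmented into quasi-stationary pieces separated by rapid transition periods (RTPs), and the RTPs of different channels are then tested for above-chance temporal coincidence. The Python sketch below illustrates that idea only, under simplifying assumptions: the variance-jump detector, the coincidence tolerance, and the shuffle baseline are stand-ins rather than the published algorithm, and the function names (rapid_transition_points, synchrony_index) are hypothetical.

```python
import numpy as np

def rapid_transition_points(signal, win=20, z_thresh=3.0):
    """Crude segmentation step: flag samples where local variance
    jumps sharply between adjacent windows (a stand-in for the RTP
    detector of the operational-synchrony method)."""
    n = len(signal)
    jumps = np.zeros(n)
    for t in range(win, n - win):
        jumps[t] = abs(np.var(signal[t:t + win]) - np.var(signal[t - win:t]))
    z = (jumps - jumps.mean()) / (jumps.std() + 1e-12)
    return np.flatnonzero(z > z_thresh)

def synchrony_index(rtp_a, rtp_b, n_samples, tol=5, n_shuffles=200, rng=None):
    """Ratio of observed near-coincident RTPs in two channels to the
    count expected by chance (estimated by random repositioning)."""
    rng = np.random.default_rng() if rng is None else rng

    def coincidences(a, b):
        return sum(np.any(np.abs(b - t) <= tol) for t in a)

    observed = coincidences(rtp_a, rtp_b)
    chance = np.mean([
        coincidences(rng.integers(0, n_samples, size=len(rtp_a)), rtp_b)
        for _ in range(n_shuffles)
    ])
    return observed / (chance + 1e-12)  # values >> 1 suggest coupling

# Toy demo: two noisy channels sharing a variance shift at sample 500,
# mimicking a simultaneous operation change at two cortical sites.
rng = np.random.default_rng(0)
n = 1000
ch1 = np.r_[rng.normal(0, 1, 500), rng.normal(0, 3, 500)]
ch2 = np.r_[rng.normal(0, 1, 500), rng.normal(0, 3, 500)]
idx = synchrony_index(rapid_transition_points(ch1),
                      rapid_transition_points(ch2),
                      n_samples=n, rng=rng)
print(f"operational-synchrony index: {idx:.2f}")
```

Because the two toy channels change state at the same sample, the index should come out well above 1; independent channels would hover near 1.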
Pages: 297-312
Number of pages: 16
Related papers
50 items in total
  • [21] An Audio-Visual Speech Separation Model Inspired by Cortico-Thalamo-Cortical Circuits
    Li, Kai
    Xie, Fenghua
    Chen, Hang
    Yuan, Kexin
    Hu, Xiaolin
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2024, 46 (10) : 6637 - 6651
  • [22] Somatosensory contribution to audio-visual speech processing
    Ito, Takayuki
    Ohashi, Hiroki
    Gracco, Vincent L.
    CORTEX, 2021, 143 : 195 - 204
  • [23] Improved Lite Audio-Visual Speech Enhancement
    Chuang, Shang-Yi
    Wang, Hsin-Min
    Tsao, Yu
    IEEE-ACM TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING, 2022, 30 : 1345 - 1359
  • [24] Audio-visual speech in noise perception in dyslexia
    van Laarhoven, Thijs
    Keetels, Mirjam
    Schakel, Lemmy
    Vroomen, Jean
    DEVELOPMENTAL SCIENCE, 2018, 21 (01)
  • [25] AUDIO-VISUAL SPEECH INPAINTING WITH DEEP LEARNING
    Morrone, Giovanni
    Michelsanti, Daniel
    Tan, Zheng-Hua
    Jensen, Jesper
    2021 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP 2021), 2021: 6653 - 6657
  • [26] A ROBUST AUDIO-VISUAL SPEECH ENHANCEMENT MODEL
    Wang, Wupeng
    Xing, Chao
    Wang, Dong
    Chen, Xiao
    Sun, Fengyu
    2020 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, 2020: 7529 - 7533
  • [27] Talker variability in audio-visual speech perception
    Heald, Shannon L. M.
    Nusbaum, Howard C.
    FRONTIERS IN PSYCHOLOGY, 2014, 5
  • [28] The effect of combined sensory and semantic components on audio-visual speech perception in older adults
    Maguinness, Corrina
    Setti, Annalisa
    Burke, Kate E.
    Kenny, Rose Anne
    Newell, Fiona N.
    FRONTIERS IN AGING NEUROSCIENCE, 2011, 3 : 1 - 9
  • [29] DEEP AUDIO-VISUAL SPEECH SEPARATION WITH ATTENTION MECHANISM
    Li, Chenda
    Qian, Yanmin
    2020 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, 2020: 7314 - 7318
  • [30] Audio-visual speech perception: a developmental ERP investigation
    Knowland, Victoria C. P.
    Mercure, Evelyne
    Karmiloff-Smith, Annette
    Dick, Fred
    Thomas, Michael S. C.
    DEVELOPMENTAL SCIENCE, 2014, 17 (01) : 110 - 124