Cortical operational synchrony during audio-visual speech integration

Cited by: 47
Authors
Fingelkurts, AA
Fingelkurts, AA
Krause, CM
Möttönen, R
Sams, M
Affiliations
[1] Moscow MV Lomonosov State Univ, Human Physiol Dept, Human Brain Res Grp, Moscow 119899, Russia
[2] BM Sci Brain & Mind Technol Res Ctr, FI-02601 Espoo, Finland
[3] Univ Helsinki, Cognit Sci Dept Psychol, FIN-00014 Helsinki, Finland
[4] Aalto Univ, Lab Computat Engn, Helsinki 02015, Finland
Keywords
multisensory integration; crossmodal; audio-visual; synchronization; operations; large-scale networks; MEG;
DOI
10.1016/S0093-934X(03)00059-2
Chinese Library Classification
R36 [Pathology]; R76 [Otorhinolaryngology];
Discipline classification codes
100104 ; 100213 ;
Abstract
Information from different sensory modalities is processed in different cortical regions. However, our daily perception is based on the overall impression resulting from the integration of information from multiple sensory modalities. At present it is not known how the human brain integrates information from different modalities into a unified percept. Using a robust phenomenon known as the McGurk effect, the present study showed that audio-visual synthesis takes place within distributed and dynamic cortical networks with emergent properties. Various cortical sites within these networks interact with each other by means of so-called operational synchrony (Kaplan, Fingelkurts, Fingelkurts, & Darkhovsky, 1997). The temporal synchronization of cortical operations processing unimodal stimuli at different cortical sites reveals the importance of the temporal features of auditory and visual stimuli for audio-visual speech integration. (C) 2003 Elsevier Science (USA). All rights reserved.
Pages: 297-312
Page count: 16
Related Papers
50 records in total
[31]   Edged based Audio-Visual Speech enhancement demonstrator [J].
Chen, Song ;
Gogate, Mandar ;
Dashtipour, Kia ;
Kirton-Wingate, Jasper ;
Hussain, Adeel ;
Doctor, Faiyaz ;
Arslan, Tughrul ;
Hussain, Amir .
INTERSPEECH 2024, 2024, :2032-2033
[32]   A model of audio-visual motion integration during active self-movement [J].
Gallagher, Maria ;
Haynes, Joshua D. ;
Culling, John F. ;
Freeman, Tom C. A. .
JOURNAL OF VISION, 2025, 25 (02)
[33]   Optimality and Limitations of Audio-Visual Integration for Cognitive Systems [J].
Boyce, William Paul ;
Lindsay, Anthony ;
Zgonnikov, Arkady ;
Rano, Inaki ;
Wong-Lin, KongFatt .
FRONTIERS IN ROBOTICS AND AI, 2020, 7
[34]   AUDIO-VISUAL SPEECH SEPARATION USING CROSS-MODAL CORRESPONDENCE LOSS [J].
Makishima, Naoki ;
Ihori, Mana ;
Takashima, Akihiko ;
Tanaka, Tomohiro ;
Orihashi, Shota ;
Masumura, Ryo .
2021 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP 2021), 2021, :6673-6677
[35]   Contributions of local speech encoding and functional connectivity to audio-visual speech perception [J].
Giordano, Bruno L. ;
Ince, Robin A. A. ;
Gross, Joachim ;
Schyns, Philippe G. ;
Panzeri, Stefano ;
Kayser, Christoph .
ELIFE, 2017, 6
[36]   Audio-Visual Speech Timing Sensitivity Is Enhanced in Cluttered Conditions [J].
Roseboom, Warrick ;
Nishida, Shin'ya ;
Fujisaki, Waka ;
Arnold, Derek H. .
PLOS ONE, 2011, 6 (04)
[37]   FaceFilter: Audio-visual speech separation using still images [J].
Chung, Soo-Whan ;
Choe, Soyeon ;
Chung, Joon Son ;
Kang, Hong-Goo .
INTERSPEECH 2020, 2020, :3481-3485
[38]   Audio-visual Multi-channel Recognition of Overlapped Speech [J].
Yu, Jianwei ;
Wu, Bo ;
Gu, Rongzhi ;
Zhang, Shi-Xiong ;
Chen, Lianwu ;
Xu, Yong ;
Yu, Meng ;
Su, Dan ;
Yu, Dong ;
Liu, Xunying ;
Meng, Helen .
INTERSPEECH 2020, 2020, :3496-3500
[39]   Perceptual congruency of audio-visual speech affects ventriloquism with bilateral visual stimuli [J].
Kanaya, Shoko ;
Yokosawa, Kazuhiko .
PSYCHONOMIC BULLETIN & REVIEW, 2011, 18 (01) :123-128