Audiovisual integration of speech: evidence for increased accuracy in “talk” versus “listen” condition

Cited: 0

Authors
Lefteris Themelis Zografos [1 ]
Anna Konstantoulaki [2 ]
Christoph Klein [1 ]
Argiro Vatakis [3 ]
Nikolaos Smyrnis [4 ]
Affiliations
[1] Laboratory of Cognitive Neuroscience and Sensorimotor Control, University Mental Health, Neurosciences and Precision Medicine Research Institute “COSTAS STEFANIS”
[2] Multisensory and Temporal Processing Laboratory (MultiTimeLab), Department of Psychology, Panteion University of Social and Political Sciences
[3] 2nd Psychiatry Department, Medical School, National and Kapodistrian University of Athens, University General Hospital «AΤΤΙΚΟΝ»
[4] Department of Child and Adolescent Psychiatry, Medical Faculty, University of Freiburg
[5] Department of Child and Adolescent Psychiatry, University of Cologne
Keywords
Multisensory integration; Temporal binding window; Self-generated action; Movement; Speech
DOI
10.1007/s00221-025-07088-7
Abstract
Processing of sensory stimuli generated by our own actions differs from that of externally generated stimuli. However, most evidence for this phenomenon concerns unisensory stimuli. Only a few studies have examined how self-generated actions affect the integration of multisensory stimuli, and most of these used abstract stimuli (e.g., flashes, beeps) rather than natural ones, such as the sensations that routinely accompany everyday actions like speech. In the current study, we explored the effect of self-generated action on multisensory integration (MSI) during speech. We used a novel paradigm in which participants either listened to the echo of their own speech while watching a video of themselves producing that speech (“talk”, active condition), or listened to a recording of their speech while watching a prerecorded video of themselves producing the same speech (“listen”, passive condition). In both conditions, different stimulus onset asynchronies (SOAs) were introduced between the auditory and visual streams, and participants performed simultaneity judgments. From these judgments we estimated a temporal binding window (TBW) of integration for each participant and condition. The TBW was significantly narrower in the active than in the passive condition, indicating more accurate MSI. These results support the conclusion that self-generated action modulates sensory perception at the multisensory, in addition to the unisensory, level.
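The TBW estimation step described above lends itself to a short illustration. Below is a minimal Python sketch (not the authors' analysis code) of one common approach: fit a Gaussian psychometric function to the proportion of “simultaneous” responses across SOAs, then take its full width at half maximum (FWHM) as the TBW. The SOA grid, response proportions, and the FWHM convention are illustrative assumptions, not values or choices taken from the paper.

```python
# Minimal sketch: estimate a temporal binding window (TBW) from
# simultaneity-judgment data by fitting a Gaussian to the proportion
# of "simultaneous" responses as a function of SOA. All numbers below
# are made up for illustration; they are not the study's data.
import numpy as np
from scipy.optimize import curve_fit

def gaussian(soa, amp, mu, sigma):
    """Proportion of 'simultaneous' responses as a function of SOA (ms)."""
    return amp * np.exp(-((soa - mu) ** 2) / (2 * sigma ** 2))

# Illustrative data: negative SOA = audio leads, positive = video leads.
soas = np.array([-300, -200, -100, 0, 100, 200, 300], dtype=float)
p_simultaneous = np.array([0.10, 0.35, 0.80, 0.95, 0.85, 0.40, 0.15])

# Fit amplitude, point of subjective simultaneity (mu), and width (sigma).
(amp, mu, sigma), _ = curve_fit(
    gaussian, soas, p_simultaneous, p0=[1.0, 0.0, 100.0]
)

# One common convention: TBW = full width at half maximum of the fitted
# curve. A narrower TBW indicates more precise audiovisual integration.
tbw = 2 * np.sqrt(2 * np.log(2)) * abs(sigma)
print(f"PSS = {mu:.1f} ms, TBW (FWHM) = {tbw:.1f} ms")
```

In a design like the one described, this fit would be run separately for each participant and condition, and the resulting TBW widths compared between the “talk” and “listen” conditions.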
Related Papers (5 items)
  • [1] Electrophysiological evidence for speech-specific audiovisual integration
    Baart, Martijn
    Stekelenburg, Jeroen J.
    Vroomen, Jean
    NEUROPSYCHOLOGIA, 2014, 53 : 115 - 121
  • [2] Assessing automaticity in audiovisual speech integration: evidence from the speeded classification task
    Soto-Faraco, S.
    Navarra, J.
    Alsius, A.
    COGNITION, 2004, 92 (03) : B13 - B23
  • [3] Increased sub-clinical levels of autistic traits are associated with reduced multisensory integration of audiovisual speech
    Van Laarhoven, Thijs
    Stekelenburg, Jeroen J.
    Vroomen, Jean
    SCIENTIFIC REPORTS, 2019, 9 (1)
  • [4] Audiovisual speech integration in pervasive developmental disorder: evidence from event-related potentials
    Magnée, Maurice J. C. M.
    de Gelder, Beatrice
    van Engeland, Herman
    Kemner, Chantal
    JOURNAL OF CHILD PSYCHOLOGY AND PSYCHIATRY, 2008, 49 (09) : 995 - 1000
  • [5] Audiovisual Speech Integration Does Not Rely on the Motor System: Evidence from Articulatory Suppression, the McGurk Effect, and fMRI
    Matchin, William
    Groulx, Kier
    Hickok, Gregory
    JOURNAL OF COGNITIVE NEUROSCIENCE, 2014, 26 (03) : 606 - 620