Audiovisual integration of speech: evidence for increased accuracy in “talk” versus “listen” condition
Cited by: 0
Authors:
Lefteris Themelis Zografos [1]
Anna Konstantoulaki [2]
Christoph Klein [1]
Argiro Vatakis [3]
Nikolaos Smyrnis [4]
Affiliations:
[1] Laboratory of Cognitive Neuroscience and Sensorimotor Control, University Mental Health Neurosciences and Precision Medicine Research Institute "COSTAS STEFANIS"
[2] Multisensory and Temporal Processing Laboratory (MultiTimeLab), Department of Psychology, Panteion University of Social and Political Sciences
[3] 2nd Psychiatry Department, National and Kapodistrian University of Athens Medical School, University General Hospital «ΑΤΤΙΚΟΝ»
[4] Department of Child and Adolescent Psychiatry, National and Kapodistrian University of Athens Medical School
[5] Department of Child and Adolescent Psychiatry, Medical Faculty, University of Freiburg
[6] University of Cologne
Keywords:
Multisensory integration;
Temporal binding window;
Self-generated action;
Movement;
Speech;
DOI: 10.1007/s00221-025-07088-7
Abstract:
Processing of sensory stimuli generated by our own actions differs from that of externally generated stimuli. However, most evidence for this phenomenon concerns the processing of unisensory stimuli. Only a few studies have examined how self-generated action affects the integration of multisensory stimuli, and most of these used abstract stimuli (e.g., flashes, beeps) rather than natural ones, such as the sensations that commonly accompany everyday actions like speech. In the current study, we explored the effect of self-generated action on multisensory integration (MSI) during speech. We used a novel paradigm in which participants either listened to the echo of their own speech while watching a video of themselves producing the same speech ("talk", active condition), or listened to their previously recorded speech while watching the prerecorded video of themselves producing the same speech ("listen", passive condition). In both conditions, different stimulus onset asynchronies were introduced between the auditory and visual streams, and participants performed simultaneity judgments. From these judgments, we determined the temporal binding window (TBW) of integration for each participant and condition. The TBW was significantly smaller in the active than in the passive condition, indicating more accurate MSI. These results support the conclusion that self-generated action modulates sensory perception at the multisensory as well as the unisensory level.