Crossmodal binding: Evaluating the “unity assumption” using audiovisual speech stimuli

Authors
Argiro Vatakis
Charles Spence
Affiliation
University of Oxford, Department of Experimental Psychology
Source
Perception & Psychophysics | 2007 / Vol. 69
Keywords
Stimulus Onset Asynchrony; Video Clip; Multisensory Integration; Temporal Order Judgment; Speech Stimulus;
Abstract
We investigated whether the “unity assumption,” according to which an observer assumes that two different sensory signals refer to the same underlying multisensory event, influences the multisensory integration of audiovisual speech stimuli. Syllables (Experiments 1, 3, and 4) or words (Experiment 2) were presented to participants at a range of different stimulus onset asynchronies using the method of constant stimuli. Participants made unspeeded temporal order judgments regarding which stream (either auditory or visual) had been presented first. The auditory and visual speech stimuli in Experiments 1–3 were either gender matched (i.e., a female face presented together with a female voice) or else gender mismatched (i.e., a female face presented together with a male voice). In Experiment 4, different utterances from the same female speaker were used to generate the matched and mismatched speech video clips. Measured in terms of the just noticeable difference, the participants in all four experiments found it easier to judge which sensory modality had been presented first when evaluating the mismatched stimuli than when evaluating the matched speech stimuli. These results therefore provide the first empirical support for the “unity assumption” in the domain of the multisensory temporal integration of audiovisual speech stimuli.
Pages: 744-756 (12 pages)
Related Papers
  • [1] Unity Assumption in Audiovisual Emotion Perception
    Sou, Ka Lon
    Say, Ashley
    Xu, Hong
    FRONTIERS IN NEUROSCIENCE, 2022, 16
  • [2] The unity assumption facilitates cross-modal binding of musical, non-speech stimuli: The role of spectral and amplitude envelope cues
    Chuen, Lorraine
    Schutz, Michael
    ATTENTION PERCEPTION & PSYCHOPHYSICS, 2016, 78 (05) : 1512 - 1528
  • [3] Audiovisual crossmodal correspondences and sound symbolism: a study using the implicit association test
    Parise, Cesare V.
    Spence, Charles
    EXPERIMENTAL BRAIN RESEARCH, 2012, 220 (3-4) : 319 - 333
  • [4] The temporal binding window for audiovisual speech: Children are like little adults
    Hillock-Dunn, Andrea
    Grantham, D. Wesley
    Wallace, Mark T.
    NEUROPSYCHOLOGIA, 2016, 88 : 74 - 82
  • [5] Functional localization of audiovisual speech using near infrared spectroscopy
    Butera, Iliza M.
    Larson, Eric D.
    DeFreese, Andrea J.
    Lee, Adrian Kc
    Gifford, Rene H.
    Wallace, Mark T.
    BRAIN TOPOGRAPHY, 2022, 35 (04) : 416 - 430
  • [6] Eye Can Hear Clearly Now: Inverse Effectiveness in Natural Audiovisual Speech Processing Relies on Long-Term Crossmodal Temporal Integration
    Crosse, Michael J.
    Di Liberto, Giovanni M.
    Lalor, Edmund C.
    JOURNAL OF NEUROSCIENCE, 2016, 36 (38) : 9888 - 9895