Speech and non-speech measures of audiovisual integration are not correlated

Authors
Jonathan M. P. Wilbiks
Violet A. Brown
Julia F. Strand
Affiliations
[1] University of New Brunswick, Department of Psychology
[2] Washington University in St. Louis, Department of Psychological & Brain Sciences
[3] Carleton College, Department of Psychology
Source
Attention, Perception, & Psychophysics | 2022, Vol. 84
Keywords
Audiovisual integration; Individual differences; Convergent validity
DOI
Not available
Abstract
Many natural events generate both visual and auditory signals, and humans are remarkably adept at integrating information from those sources. However, individuals appear to differ markedly in their ability or propensity to combine what they hear with what they see. Individual differences in audiovisual integration have been established using a range of materials, including speech stimuli (seeing and hearing a talker) and simpler audiovisual stimuli (seeing flashes of light combined with tones). Although there are multiple tasks in the literature that are referred to as “measures of audiovisual integration,” the tasks differ widely with respect to both the type of stimuli used (speech versus non-speech) and the nature of the task itself (e.g., some tasks use conflicting auditory and visual stimuli whereas others use congruent stimuli). It is not clear whether these varied tasks are actually measuring the same underlying construct: audiovisual integration. This study tested the relationships among four commonly used measures of audiovisual integration, two of which use speech stimuli (susceptibility to the McGurk effect and a measure of audiovisual benefit), and two of which use non-speech stimuli (the sound-induced flash illusion and audiovisual integration capacity). We replicated previous work showing large individual differences in each measure but found no significant correlations among any of the measures. These results suggest that tasks that are commonly referred to as measures of audiovisual integration may be tapping into different parts of the same process or different constructs entirely.
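To illustrate the shape of the convergent-validity test the abstract describes, the sketch below computes pairwise Pearson correlations among four per-participant scores. The variable names, the simulated data, and the use of `pearsonr` are illustrative assumptions for this sketch; they are not the authors' analysis code or scoring procedures.

```python
# Minimal sketch (hypothetical data): pairwise correlations among four
# per-participant audiovisual integration scores.
from itertools import combinations

import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n = 100  # hypothetical sample size

# Hypothetical, independently simulated scores for each measure.
measures = {
    "mcgurk_susceptibility": rng.normal(size=n),
    "audiovisual_benefit": rng.normal(size=n),
    "flash_illusion": rng.normal(size=n),
    "integration_capacity": rng.normal(size=n),
}

# Correlate every pair of measures; strong correlations would suggest a
# shared underlying construct, near-zero correlations would not.
for (name_a, a), (name_b, b) in combinations(measures.items(), 2):
    r, p = pearsonr(a, b)
    print(f"{name_a} vs. {name_b}: r = {r:.2f}, p = {p:.3f}")
```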
Pages: 1809-1819
Number of pages: 10