Semantic and spatial congruency mould audiovisual integration depending on perceptual awareness

Cited by: 11
Authors
Delong, Patrycja [1]
Noppeney, Uta [1,2]
Affiliations
[1] Univ Birmingham, Ctr Computat Neurosci & Cognit Robot, Birmingham, W Midlands, England
[2] Radboud Univ Nijmegen, Donders Inst Brain Cognit & Behav, Nijmegen, Netherlands
Funding
European Research Council;
Keywords
MULTISENSORY INTEGRATION; VISUAL TARGET; CONSCIOUSNESS; ADAPTATION; DISCRIMINATION; INFERENCE; VISION; SPEECH;
DOI
10.1038/s41598-021-90183-w
Chinese Library Classification
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences];
Subject Classification Codes
07; 0710; 09;
Abstract
Information integration is considered a hallmark of human consciousness. Recent research has challenged this tenet by showing multisensory interactions in the absence of awareness. This psychophysics study assessed the impact of spatial and semantic correspondences on audiovisual binding in the presence and absence of visual awareness by combining forward-backward masking with spatial ventriloquism. Observers were presented with object pictures and synchronous sounds that were spatially and/or semantically congruent or incongruent. On each trial, observers located the sound, identified the picture and rated the picture's visibility. We observed a robust ventriloquist effect for both subjectively visible and invisible pictures, indicating that pictures that evade our perceptual awareness influence where we perceive sounds. Critically, semantic congruency enhanced these visual biases on perceived sound location only when the picture entered observers' awareness. Our results demonstrate that crossmodal influences operating from vision to audition and vice versa are interactively controlled by spatial and semantic congruency in the presence of awareness. However, when visual processing is disrupted by masking procedures, audiovisual interactions no longer depend on semantic correspondences.
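For readers unfamiliar with how the spatial ventriloquist effect is typically quantified, the sketch below illustrates the standard crossmodal-bias measure: the shift of the reported sound location toward the visual stimulus, expressed as a fraction of the audiovisual spatial disparity. It also shows the reliability-weighted (maximum-likelihood) fusion estimate often used as a benchmark in this literature. This is a minimal illustration; the function names, example numbers, and weighting model are assumptions for exposition, not the authors' analysis code.

```python
import numpy as np

def ventriloquist_bias(reported_sound, true_sound, true_visual):
    """Crossmodal bias: shift of the reported sound location toward the
    visual stimulus, as a fraction of the audiovisual spatial disparity.
    0 = no visual influence, 1 = full capture by the visual stimulus."""
    disparity = true_visual - true_sound
    shift = reported_sound - true_sound
    return np.mean(shift / disparity)

def mle_predicted_location(s_aud, s_vis, sigma_aud, sigma_vis):
    """Reliability-weighted (maximum-likelihood) estimate of the sound
    location under forced fusion of the auditory and visual cues."""
    w_vis = sigma_aud**2 / (sigma_aud**2 + sigma_vis**2)
    return w_vis * s_vis + (1 - w_vis) * s_aud

# Illustrative trials: sound at -10 deg, picture at +10 deg azimuth.
reports = np.array([-7.0, -6.5, -8.0, -5.5])   # hypothetical localization responses
print(ventriloquist_bias(reports, true_sound=-10.0, true_visual=10.0))   # ~0.16
print(mle_predicted_location(s_aud=-10.0, s_vis=10.0,
                             sigma_aud=6.0, sigma_vis=2.0))              # 8.0
```

In terms of this measure, the study's key finding is that the bias remains above zero even for pictures rated invisible, while semantic congruency increases it only on visible trials.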
Pages: 14