The temporal dynamics of conscious and unconscious audio-visual semantic integration

Cited: 0
Authors
Gao, Mingjie [1 ]
Zhu, Weina [1 ]
Drewes, Jan [2 ]
Affiliations
[1] Yunnan Univ, Sch Informat Sci, Kunming 650091, Peoples R China
[2] Sichuan Normal Univ, Inst Brain & Psychol Sci, Chengdu, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
NATURALISTIC SOUNDS; OCULAR DOMINANCE; SPOKEN WORDS; TIME-COURSE; SPEECH; CORRESPONDENCES; IDENTIFICATION; PERCEPTION; COMPONENTS; SOFTWARE;
DOI
10.1016/j.heliyon.2024.e33828
Chinese Library Classification
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences];
Discipline Codes
07; 0710; 09;
Abstract
We compared the time course of cross-modal semantic effects induced by naturalistic sounds and spoken words on the processing of visual stimuli, whether visible or suppressed from awareness through continuous flash suppression. Under visible conditions, spoken words elicited audio-visual semantic effects over a longer range of SOAs (-1000, -500, -250 ms) than naturalistic sounds (-500, -250 ms). Performance was generally better with auditory primes, and more so with congruent stimuli. Spoken words presented well in advance (-1000, -500 ms) outperformed naturalistic sounds; the opposite was true for (near-)simultaneous presentations. Congruent spoken words yielded better categorization performance than congruent naturalistic sounds. The audio-visual semantic congruency effect still occurred with suppressed visual stimuli, although without significant differences in temporal patterns between the auditory types. These findings indicate that: 1. Semantically congruent auditory input can enhance visual processing performance even when the visual stimulus is imperceptible to conscious awareness. 2. The temporal dynamics are contingent on the auditory type only when the visual stimulus is visible. 3. Audio-visual semantic integration requires sufficient time for processing auditory information.
Pages: 14
Related Papers (50 in total)
  • [41] Teder-Sälejärvi WA, Di Russo F, McDonald JJ, Hillyard SA. Effects of spatial congruity on audio-visual multimodal integration. JOURNAL OF COGNITIVE NEUROSCIENCE, 2005, 17 (09): 1396-1409
  • [42] Brang D, Zweig J, Mishra J, Suzuki S, Hillyard SA, Ramachandran VS, Grabowecky M. Anatomical and functional networks underlying audio-visual integration. JOURNAL OF COGNITIVE NEUROSCIENCE, 2013: 87-87
  • [43] Arnold DH, Keane B, Yarrow K. Audio-visual temporal recalibration is driven by decisional processes. PERCEPTION, 2014, 43 (01): 118-118
  • [44] Handley R, Reinders S, Marques T, Pariante C, McGuire P, Dazzan P. Neural correlates of audio-visual integration: an fMRI study. EARLY INTERVENTION IN PSYCHIATRY, 2008, 2: A97-A97
  • [45] Yarrow K. Assessing proposed explanations of audio-visual temporal recalibration. I-PERCEPTION, 2014, 5 (04): 457-457
  • [46] Gao Y, Wang X, Zhang Y, Zeng P, Ma Y. Temporal feature prediction in audio-visual deepfake detection. ELECTRONICS, 2024, 13 (17)
  • [47] Birhala A, Ristea CN, Radoi A, Dutu LC. Temporal aggregation of audio-visual modalities for emotion recognition. 2020 43RD INTERNATIONAL CONFERENCE ON TELECOMMUNICATIONS AND SIGNAL PROCESSING (TSP), 2020: 305-308
  • [48] Gori M, Chilosi A, Forli F, Burr D. Audio-visual temporal perception in children with restored hearing. NEUROPSYCHOLOGIA, 2017, 99: 350-359
  • [49] Wang Y, Ichikawa M. Effect of stimulus duration on audio-visual temporal recalibration. I-PERCEPTION, 2019, 10: 145-145
  • [50] Girin L, Foucher E, Feng G. An audio-visual distance for audio-visual speech vector quantization. 1998 IEEE SECOND WORKSHOP ON MULTIMEDIA SIGNAL PROCESSING, 1998: 523-528