The temporal dynamics of conscious and unconscious audio-visual semantic integration

Times Cited: 0
Authors
Gao, Mingjie [1 ]
Zhu, Weina [1 ]
Drewes, Jan [2 ]
Affiliations
[1] Yunnan Univ, Sch Informat Sci, Kunming 650091, Peoples R China
[2] Sichuan Normal Univ, Inst Brain & Psychol Sci, Chengdu, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
NATURALISTIC SOUNDS; OCULAR DOMINANCE; SPOKEN WORDS; TIME-COURSE; SPEECH; CORRESPONDENCES; IDENTIFICATION; PERCEPTION; COMPONENTS; SOFTWARE;
DOI
10.1016/j.heliyon.2024.e33828
Chinese Library Classification
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences];
Discipline Codes
07 ; 0710 ; 09 ;
Abstract
We compared the time course of cross-modal semantic effects induced by naturalistic sounds and spoken words on the processing of visual stimuli, whether visible or suppressed from awareness through continuous flash suppression. Under visible conditions, spoken words elicited audio-visual semantic effects over a wider range of SOAs (-1000, -500, -250 ms) than naturalistic sounds (-500, -250 ms). Performance was generally better with auditory primes, and more so with congruent stimuli. Spoken words presented well in advance (-1000, -500 ms) outperformed naturalistic sounds; the opposite held for (near-)simultaneous presentations. Congruent spoken words yielded better categorization performance than congruent naturalistic sounds. The audio-visual semantic congruency effect persisted with suppressed visual stimuli, although the temporal patterns no longer differed significantly between auditory types. These findings indicate that: 1. semantically congruent auditory input can enhance visual processing performance even when the visual stimulus is inaccessible to conscious awareness; 2. the temporal dynamics are contingent on auditory type only when the visual stimulus is visible; 3. audio-visual semantic integration requires sufficient time for processing the auditory information.
Pages: 14