Transfer of Audio-Visual Temporal Training to Temporal and Spatial Audio-Visual Tasks

Cited by: 10
Authors
Suerig, Ralf [1 ]
Bottari, Davide [1 ,2 ]
Roeder, Brigitte [1 ]
Affiliations
[1] Univ Hamburg, Biol Psychol & Neuropsychol, Von Melle Pk 11, D-20146 Hamburg, Germany
[2] IMT Sch Adv Studies Lucca, Lucca, Italy
Funding
European Research Council;
关键词
Multisensory learning; temporal cross-modal binding; spatial cross-modal binding; REACTION-TIME; INTERSENSORY FACILITATION; MULTISENSORY INTEGRATION; REDUNDANT-SIGNALS; WORKING-MEMORY; ASYNCHRONY; ENHANCEMENT; VENTRILOQUISM; INFORMATION; SENSITIVITY;
DOI
10.1163/22134808-00002611
Chinese Library Classification
Q6 [Biophysics];
Discipline Code
071011;
Abstract
Temporal and spatial characteristics of sensory inputs are fundamental to multisensory integration because they provide probabilistic information as to whether or not multiple sensory inputs belong to the same event. The multisensory temporal binding window defines the time range within which two stimuli of different sensory modalities are merged into one percept and has been shown to depend on training. The aim of the present study was to evaluate the role of the training procedure in improving multisensory temporal discrimination and to test for a possible transfer of training to other multisensory tasks. Participants were trained over five sessions in a two-alternative forced-choice simultaneity judgment task. The task difficulty of each trial was either set at each participant's threshold (adaptive group) or chosen randomly (control group). A possible transfer of improved multisensory temporal discrimination to multisensory binding was tested with a redundant signal paradigm in which the temporal alignment of auditory and visual stimuli was systematically varied. Moreover, the size of the spatial audio-visual ventriloquist effect was assessed. Adaptive training resulted in faster improvement than the control condition. Transfer effects were found for both tasks: the processing speed of auditory inputs and the size of the ventriloquist effect increased in the adaptive group following the training. We suggest that the relative precision of the temporal and spatial features of a cross-modal stimulus is weighted during multisensory integration. Thus, changes in the precision of temporal processing are expected to enhance the likelihood of multisensory integration for temporally aligned cross-modal stimuli.
Pages: 556-578
Page count: 23
Related Papers
50 records
  • [1] Audio-visual spatial alignment improves integration in the presence of a competing audio-visual stimulus
    Fleming, Justin T.
    Noyce, Abigail L.
    Shinn-Cunningham, Barbara G.
    NEUROPSYCHOLOGIA, 2020, 146
  • [2] The Development of Audio-Visual Integration for Temporal Judgements
    Adams, Wendy J.
    PLOS COMPUTATIONAL BIOLOGY, 2016, 12 (04)
  • [3] Visual field differences in temporal synchrony processing for audio-visual stimuli
    Takeshima, Yasuhiro
    PLOS ONE, 2021, 16 (12)
  • [4] Feedback Modulates Audio-Visual Spatial Recalibration
    Kramer, Alexander
    Roeder, Brigitte
    Bruns, Patrick
    FRONTIERS IN INTEGRATIVE NEUROSCIENCE, 2020, 13
  • [5] Insights into audio-visual temporal perception in bipolar disorder and schizophrenia
    Gori, Monica
    Amadeo, Maria Bianca
    Escelsior, Andrea
    Esposito, Davide
    Inuggi, Alberto
    Guglielmo, Riccardo
    Polena, Luis
    Bode, Juxhin
    da Silva, Beatriz Pereira
    Amore, Mario
    Serafini, Gianluca
    CURRENT RESEARCH IN BEHAVIORAL SCIENCES, 2025, 8
  • [6] Selective Attention Modulates the Direction of Audio-Visual Temporal Recalibration
    Ikumi, Nara
    Soto-Faraco, Salvador
    PLOS ONE, 2014, 9 (07)
  • [7] Modulation of visual responses in the superior temporal sulcus by audio-visual congruency
    Dahl, Christoph D.
    Logothetis, Nikos K.
    Kayser, Christoph
    FRONTIERS IN INTEGRATIVE NEUROSCIENCE, 2010, 4
  • [8] Audio-Visual Detection Benefits in the Rat
    Gleiss, Stephanie
    Kayser, Christoph
    PLOS ONE, 2012, 7 (09)
  • [9] Audio-Visual Fusion With Temporal Convolutional Attention Network for Speech Separation
    Liu, Debang
    Zhang, Tianqi
    Christensen, Mads Graesboll
    Yi, Chen
    An, Zeliang
    IEEE-ACM TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING, 2024, 32 : 4647 - 4660
  • [10] Grouping and Segregation of Sensory Events by Actions in Temporal Audio-Visual Recalibration
    Ikumi, Nara
    Soto-Faraco, Salvador
    FRONTIERS IN INTEGRATIVE NEUROSCIENCE, 2017, 10