A biphasic effect of cross-modal priming on visual shape recognition

Times Cited: 3
Authors
Kwok, Sze Chai [1 ,2 ,3 ]
Fantoni, Carlo [4 ]
Tamburini, Laura [4 ]
Wang, Lei [1 ]
Gerbino, Walter [4 ]
Affiliations
[1] East China Normal Univ, Sch Psychol & Cognit Sci, Shanghai Key Lab Brain Funct Genom (Minist Educ), Shanghai 200062, Peoples R China
[2] East China Normal Univ, Shanghai Key Lab Magnet Resonance, Shanghai 200062, Peoples R China
[3] NYU Shanghai, NYU ECNU Inst Brain & Cognit Sci, Shanghai 200062, Peoples R China
[4] Univ Trieste, Dept Life Sci, Psychol Unit Gaetano Kanizsa, Trieste, Italy
Funding
Natural Science Foundation of Shanghai
Keywords
Attention; Cross-modal correspondence; Recognition memory; Priming
Keywords Plus
EPISODIC MEMORY; PARIETAL CORTEX; WORKING-MEMORY; INHIBITION; ATTENTION; IMPACT; RETURN
DOI
10.1016/j.actpsy.2017.12.013
Chinese Library Classification (CLC)
B84 [Psychology]
Discipline Code
04; 0402
Abstract
We used a cross-modal priming paradigm to evoke a biphasic effect in visual short-term memory. Participants were required to match the memorandum (a visual shape, either spiky or curvy) to a delayed probe (a shape belonging to the same category). In two-thirds of trials the sequence of shapes was accompanied by a task-irrelevant sound (either tzk or upo, cross-modally corresponding to the spiky and curvy shape categories, respectively). The biphasic effect occurred when a congruent vs. incongruent sound was presented 200 ms after the memorandum, but not when the sound was presented 200 ms before or simultaneously with it. The biphasic pattern of recognition sensitivities was revealed by an interaction between cross-modal congruency and probe delay: sensitivity was higher for visual shapes paired with a congruent rather than an incongruent sound at a 300-ms probe delay, while the opposite was true at a 1300-ms delay. We suggest that this biphasic pattern depended on the task-irrelevant sound activating different levels of shape processing as a function of the relative timing of sound, memorandum, and probe.
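The "recognition sensitivities" above are signal detection sensitivities. As an illustration only (not the authors' analysis code), the minimal Python sketch below computes d' = z(hit rate) - z(false-alarm rate) for the four congruency-by-probe-delay cells the abstract describes; the d_prime helper and all rates are hypothetical.

```python
# Illustrative sketch: signal detection sensitivity (d') per condition.
# The hit/false-alarm rates below are hypothetical, not the paper's data.
from scipy.stats import norm

def d_prime(hit_rate: float, fa_rate: float) -> float:
    """d' = z(hit rate) - z(false-alarm rate)."""
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Four cells of the congruency x probe-delay interaction described above.
cells = {
    ("congruent", 300): (0.85, 0.20),
    ("incongruent", 300): (0.75, 0.25),
    ("congruent", 1300): (0.70, 0.30),
    ("incongruent", 1300): (0.80, 0.22),
}
for (congruency, delay_ms), (hit, fa) in cells.items():
    print(f"{congruency:>11s}, {delay_ms:>4d} ms: d' = {d_prime(hit, fa):.2f}")
```

With rates patterned this way, d' comes out higher for congruent trials at the 300-ms delay and higher for incongruent trials at the 1300-ms delay, mirroring the crossover interaction the abstract reports.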
Pages: 43-50
Number of Pages: 8
Related Papers (50 in total)
  • [21] The context-contingent nature of cross-modal activations of the visual cortex
    Matusz, Pawel J.
    Retsa, Chrysa
    Murray, Micah M.
    NEUROIMAGE, 2016, 125 : 996 - 1004
  • [22] The dog's meow: asymmetrical interaction in cross-modal object recognition
    Yuval-Greenberg, Shlomit
    Deouell, Leon Y.
    EXPERIMENTAL BRAIN RESEARCH, 2009, 193 (04) : 603 - 614
  • [23] Cross-modal metacognition: Visual and tactile confidence share a common scale
    Klever, Lena
    Beyvers, Marie Christin
    Fiehler, Katja
    Mamassian, Pascal
    Billino, Jutta
    JOURNAL OF VISION, 2023, 23 (05)
  • [24] Auditory to Visual Cross-Modal Adaptation for Emotion: Psychophysical and Neural Correlates
    Wang, Xiaodong
    Guo, Xiaotao
    Chen, Lin
    Liu, Yijun
    Goldberg, Michael E.
    Xu, Hong
    CEREBRAL CORTEX, 2017, 27 (02) : 1337 - 1346
  • [25] Different visual and auditory latencies affect cross-modal non-spatial repetition inhibition
    Wu, Xiaogang
    Wang, Aijun
    Tang, Xiaoyu
    Zhang, Ming
    ACTA PSYCHOLOGICA, 2019, 200
  • [26] Cross-modal feature and conjunction errors in recognition memory
    Jones, TC
    Jacoby, LL
    Gellis, LA
    JOURNAL OF MEMORY AND LANGUAGE, 2001, 44 (01) : 131 - 152
  • [27] A neurocognitive study of laughter using a cross-modal emotion priming paradigm
    Amoss, Richard T.
    Frishkoff, Gwen A.
    PSYCHOPHYSIOLOGY, 2014, 51 : S58 - S58
  • [28] Development of cross-modal processing
    Robinson, Christopher W.
    Sloutsky, Vladimir M.
    WILEY INTERDISCIPLINARY REVIEWS-COGNITIVE SCIENCE, 2010, 1 (01) : 135 - 141
  • [29] Cross-modal transfer after auditory task-switching training
    Kattner, Florian
    Samaan, Larissa
    Schubert, Torsten
    MEMORY & COGNITION, 2019, 47 (05) : 1044 - 1061
  • [30] Cross-Modal Correspondence Between Speech Sound and Visual Shape Influencing Perceptual Representation of Shape: the Role of Articulation and Pitch
    Kwak, Yuna
    Nam, Hosung
    Kim, Hyun-Woong
    Kim, Chai-Youn
    MULTISENSORY RESEARCH, 2020, 33 (06) : 569 - 598