Cross-modal contextual memory guides selective attention in visual-search tasks

Times Cited: 5
Authors
Chen, Siyi [1]
Shi, Zhuanghua [1,2]
Zinchenko, Artyom [1]
Mueller, Hermann J. [1,2]
Geyer, Thomas [1,2]
Affiliations
[1] Ludwig Maximilians Univ Munchen, Dept Psychol, Gen & Expt Psychol, Munich, Germany
[2] Ludwig Maximilians Univ Munchen, Munich Ctr Neurosci Brain & Mind, Munich, Germany
Keywords
CDA; contextual cueing; event-related potentials; multisensory processing; PCN; selective attention; IMPLICIT MEMORY; GUIDANCE; DEPLOYMENT; POTENTIALS; STIMULUS; SYSTEMS; OBJECTS
DOI
10.1111/psyp.14025
Chinese Library Classification
B84 [Psychology]
Subject Classification
04; 0402
Abstract
Visual search is speeded when a target item is positioned consistently within an invariant (repeatedly encountered) configuration of distractor items ("contextual cueing"). Contextual cueing is also observed in cross-modal search, when the location of the (visual) target is predicted by distractors from another, tactile, sensory modality. Previous studies examining lateralized event-related potential (ERP) waveforms with millisecond precision have shown that learned visual contexts improve a whole cascade of search-processing stages. Drawing on ERPs, the present study tested alternative accounts of contextual cueing in tasks in which distractor-target contextual associations are established across, as compared to within, sensory modalities. To this end, we devised a novel cross-modal search task: search for a visual feature singleton, with repeated (and nonrepeated) distractor configurations presented either within the same (visual) or a different (tactile) modality. We found reaction times (RTs) to be faster for repeated versus nonrepeated configurations, with comparable facilitation effects for visual (unimodal) and tactile (crossmodal) context cues. Further, for repeated configurations, amplitudes were enhanced (and latencies reduced) for the ERP components indexing attentional allocation (PCN) and postselective analysis of the target (CDA), respectively; both components correlated positively with the RT facilitation. These effects were again comparable between uni- and crossmodal cueing conditions. In contrast, motor-related processes indexed by the response-locked lateralized readiness potential (LRP) contributed little to the RT effects. These results indicate that both uni- and crossmodal context cues benefit the same visual processing stages, namely the selection and subsequent analysis of the search target.
Pages: 15