Real-world object categories and scene contexts conjointly structure statistical learning for the guidance of visual search

Cited by: 0
Authors
Ariel M. Kershner
Andrew Hollingworth
Affiliations
[1] The University of Iowa, Department of Psychological and Brain Sciences
Source
Attention, Perception, & Psychophysics | 2022 / Vol. 84
Keywords
Visual search; Statistical learning; Categorical cuing
DOI
Not available
Abstract
We examined how object categories and scene contexts act in conjunction to structure the acquisition and use of statistical regularities to guide visual search. In an exposure session, participants viewed five object exemplars in each of two colors in each of 42 real-world categories. Objects were presented individually against scene context backgrounds. Exemplars within a category were presented with different contexts as a function of color (e.g., the five red staplers were presented with a classroom scene, and the five blue staplers with an office scene). Participants then completed a visual search task, in which they searched for novel exemplars matching a category label cue among arrays of eight objects superimposed over a scene background. In the context-match condition, the color of the target exemplar was consistent with the color associated with that combination of category and scene context from the exposure phase (e.g., a red stapler in a classroom scene). In the context-mismatch condition, the color of the target was not consistent with that association (e.g., a red stapler in an office scene). In two experiments, search response time was reliably lower in the context-match than in the context-mismatch condition, demonstrating that the learning of category-specific color regularities was itself structured by scene context. The results indicate that categorical templates retrieved from long-term memory are biased toward the properties of recent exemplars and that this learning is organized in a scene-specific manner.
Pages: 1304-1316
Number of pages: 12