Occluded information is restored at preview but not during visual search

Times Cited: 6
Authors
Alexander, Robert G. [1]
Zelinsky, Gregory J. [1,2]
Affiliations
[1] SUNY Stony Brook, Dept Psychol, Stony Brook, NY 11794 USA
[2] SUNY Stony Brook, Dept Comp Sci, Stony Brook, NY 11794 USA
Keywords
visual search; eye movements; occlusion; target template; perceptual filling-in; amodal completion; shape completion; guided search; target; model; features; objects; representation; integration
DOI
10.1167/18.11.4
Chinese Library Classification
R77 [Ophthalmology]
Discipline Code
100212
Abstract
Objects often appear with some amount of occlusion. We fill in missing information using local shape features even before attending to those objects, a process called amodal completion. Here we explore the possibility that knowledge about common realistic objects can be used to "restore" missing information even in cases where amodal completion is not expected. We systematically varied whether visual search targets were occluded or not, both at preview and in search displays. Button-press responses were longest when the preview was unoccluded and the target was occluded in the search display. This pattern is consistent with a target-verification process that uses the features visible at preview but does not restore missing information in the search display. However, visual search guidance was weakest whenever the target was occluded in the search display, regardless of whether it was occluded at preview. This pattern suggests that information missing during the preview was restored and used to guide search, thereby resulting in a feature mismatch and poor guidance. If this process were preattentive, as with amodal completion, we should have found roughly equivalent search guidance across all conditions because the target would always be unoccluded or restored, resulting in no mismatch. We conclude that realistic objects are restored behind occluders during search target preview, even in situations not prone to amodal completion, and this restoration does not occur preattentively during search.
Pages: 1-16
Page Count: 16