Specifying the Precision of Guiding Features for Visual Search

Cited by: 24
Authors
Alexander, Robert G. [1,3]
Nahvi, Roxanna J. [1]
Zelinsky, Gregory J. [1,2]
Affiliations
[1] SUNY Stony Brook, Dept Psychol, Stony Brook, NY 11794 USA
[2] SUNY Stony Brook, Dept Comp Sci, Stony Brook, NY 11794 USA
[3] SUNY Downstate Med Ctr, Dept Ophthalmol, 450 Clarkson Ave, MSC 58, Brooklyn, NY 11203 USA
Funding
US National Science Foundation; US National Institutes of Health
Keywords
eye movements; feature guidance; target templates; visual memory precision; visual search; WORKING-MEMORY; EYE-MOVEMENTS; ILLUSORY CONJUNCTIONS; TARGET; ATTENTION; ORIENTATION; GUIDANCE; OBJECTS; COLOR; SHAPE;
DOI
10.1037/xhp0000668
CLC number
B84 [Psychology]
Subject classification code
04; 0402
Abstract
Visual search is the task of finding things with uncertain locations. Despite decades of research, the features that guide visual search remain poorly specified, especially in realistic contexts. This study tested the role of two features, shape and orientation, both in the presence and absence of hue information. We conducted five experiments to describe preview-target mismatch effects: decreases in performance caused by differences between the image of the target as it appears in the preview and as it appears in the actual search display. These mismatch effects provide direct measures of feature importance, with larger performance decrements expected for more important features. Contrary to previous conclusions, our data suggest that shape and orientation only guide visual search when color is not available. By varying the probability of mismatch in each feature dimension, we also show that these patterns of feature guidance do not change with the probability that the previewed feature will be invalid. We conclude that the target representations used to guide visual search are much less precise than previously believed, with participants encoding and using color and little else.
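A minimal illustrative sketch (hypothetical data and column names, not taken from the paper) of how the preview-target mismatch cost described above might be quantified: for each feature dimension, the mean reaction-time difference between mismatch and match trials, with larger costs indicating features that contribute more to guidance.

```python
# Hypothetical example of computing a mismatch cost per feature dimension.
# Columns assumed: 'feature' (which dimension was manipulated), 'mismatch'
# (preview differed from the search target), 'rt' (reaction time, ms).
import pandas as pd

trials = pd.DataFrame({
    "feature":  ["shape", "shape", "orientation", "orientation", "hue", "hue"],
    "mismatch": [False, True, False, True, False, True],
    "rt":       [620, 655, 610, 640, 600, 790],  # made-up values
})

# Mismatch cost = mean RT on mismatch trials minus mean RT on matched trials.
mean_rt = trials.groupby(["feature", "mismatch"])["rt"].mean().unstack("mismatch")
mismatch_cost = mean_rt[True] - mean_rt[False]
print(mismatch_cost.sort_values(ascending=False))
```

In this toy data the hue mismatch produces a much larger cost than shape or orientation mismatches, the qualitative pattern the abstract reports.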
Pages: 1248-1264
Number of pages: 17