Signal detection evidence for limited capacity in visual search

Cited by: 0
Authors
Evan M. Palmer
David E. Fencsik
Stephen J. Flusberg
Todd S. Horowitz
Jeremy M. Wolfe
Affiliations
[1] Wichita State University, Department of Psychology
[2] California State University, East Bay, Department of Psychology
[3] Stanford University, Department of Psychology
[4] Visual Attention Laboratory, Brigham and Women’s Hospital
[5] Harvard Medical School, Department of Ophthalmology
Source
Attention, Perception, & Psychophysics | 2011 / Volume 73
Keywords
Theoretical and computational attention models; Visual search; Signal detection theory
DOI
Not available
Abstract
The nature of capacity limits (if any) in visual search has been a topic of controversy for decades. In 30 years of work, researchers have attempted to distinguish between two broad classes of visual search models. Attention-limited models have proposed two stages of perceptual processing: an unlimited-capacity preattentive stage, and a limited-capacity selective attention stage. Conversely, noise-limited models have proposed a single, unlimited-capacity perceptual processing stage, with decision processes influenced only by stochastic noise. Here, we use signal detection methods to test a strong prediction of attention-limited models. In standard attention-limited models, performance of some searches (feature searches) should only be limited by a preattentive stage. Other search tasks (e.g., spatial configuration search for a “2” among “5”s) should be additionally limited by an attentional bottleneck. We equated average accuracies for a feature and a spatial configuration search over set sizes of 1–8 for briefly presented stimuli. The strong prediction of attention-limited models is that, given overall equivalence in performance, accuracy should be better on the spatial configuration search than on the feature search for set size 1, and worse for set size 8. We confirm this crossover interaction and show that it is problematic for at least one class of one-stage decision models.
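The noise-limited account mentioned above is commonly formalized as a single-stage signal detection observer that applies a "max rule" over all display items. As a rough illustration only (this is not the authors' model; the d′ value, criterion, and trial count below are arbitrary assumptions), the sketch simulates such an unlimited-capacity observer and shows that accuracy still falls with set size, purely because larger displays contribute more noise samples to the decision.

```python
# Minimal sketch of an unlimited-capacity SDT "max rule" observer for yes/no search.
# Illustrative assumptions: d' = 2.0, criterion = 1.0, 100,000 trials per condition.
import numpy as np

rng = np.random.default_rng(0)

def max_rule_accuracy(d_prime, set_size, criterion, n_trials=100_000):
    """Proportion correct for a max-of-N decision rule at one set size."""
    # Target-present trials: one signal item plus (set_size - 1) noise items.
    signal = rng.normal(d_prime, 1.0, (n_trials, 1))
    noise = rng.normal(0.0, 1.0, (n_trials, set_size - 1))
    present_max = np.concatenate([signal, noise], axis=1).max(axis=1)
    hits = np.mean(present_max > criterion)

    # Target-absent trials: every item is a noise sample.
    absent_max = rng.normal(0.0, 1.0, (n_trials, set_size)).max(axis=1)
    correct_rejections = np.mean(absent_max <= criterion)

    # Equal numbers of present and absent trials, so average the two rates.
    return 0.5 * (hits + correct_rejections)

for n in (1, 2, 4, 8):
    acc = max_rule_accuracy(d_prime=2.0, set_size=n, criterion=1.0)
    print(f"set size {n}: proportion correct = {acc:.3f}")
```

Because set-size costs of this kind arise in a one-stage model without any attentional bottleneck, the diagnostic evidence in the study is not the decline with set size itself but the crossover interaction between the two tasks after their average accuracies were equated.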
Pages: 2413-2424
Page count: 11