Reliably measuring learning-dependent distractor suppression with eye tracking

Cited by: 2
Authors
Kim, Andy J. [1 ]
Gregoire, Laurent [2 ]
Anderson, Brian A. [2 ]
Affiliations
[1] University of Southern California, School of Gerontology, 3715 McClintock Ave, Los Angeles, CA 90089, USA
[2] Texas A&M University, Department of Psychological & Brain Sciences, College Station, TX, USA
Keywords
Reliability; Attention capture; Distractor suppression; Visual search; Individual differences; Capture; Time; Color
DOI
10.3758/s13428-024-02552-8
Chinese Library Classification (CLC)
B841 [Psychological research methods]
Discipline code
040201
Abstract
In the field of psychological science, behavioral performance in computer-based cognitive tasks often exhibits poor reliability. The absence of reliable measures of cognitive processes contributes to non-reproducibility in the field and impedes the investigation of individual differences. Specifically in visual search paradigms, response time-based measures have shown poor test-retest reliability and internal consistency across attention capture and distractor suppression, but one study has demonstrated the potential for oculomotor measures to exhibit superior reliability. Therefore, in this study, we investigated three datasets to compare the reliability of learning-dependent distractor suppression measured via distractor fixations (oculomotor capture) and latency to fixate the target (fixation times). Our findings reveal superior split-half reliability of oculomotor capture compared to that of fixation times regardless of the critical distractor comparison, with the reliability of oculomotor capture in most cases falling within the range that is acceptable for the investigation of individual differences. We additionally find that older adults have superior oculomotor reliability compared with young adults, potentially addressing a significant limitation in the aging literature of high variability in response time measures due to slower responses. Our findings highlight the utility of measuring eye movements in the pursuit of reliable indicators of distractor processing and the need to further test and develop additional measures in other sensory domains to maximize statistical power, reliability, and reproducibility.
Pages: 9
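
The split-half reliability statistic summarized in the abstract can be illustrated with a short sketch. The Python code below is a minimal illustration, not the authors' analysis pipeline: it computes an odd/even split-half correlation of a per-participant oculomotor-capture score and applies the Spearman-Brown correction, which compensates for halving the number of trials. The data-frame layout, the column names, and the split_half_reliability helper are hypothetical.

# A minimal sketch (not the authors' analysis code) of odd/even split-half
# reliability with the Spearman-Brown correction. The trial layout and
# column names below are hypothetical.
import numpy as np
import pandas as pd


def split_half_reliability(trials: pd.DataFrame, score_col: str) -> float:
    """Odd/even split-half reliability of a per-participant mean score.

    Assumes one row per trial with columns 'subject', 'trial' (trial index),
    and `score_col` (e.g., 1 if the first saccade landed on the distractor,
    0 otherwise).
    """
    # Mean score per participant, separately for even- and odd-numbered trials
    even = trials[trials["trial"] % 2 == 0].groupby("subject")[score_col].mean()
    odd = trials[trials["trial"] % 2 == 1].groupby("subject")[score_col].mean()
    even, odd = even.align(odd, join="inner")  # keep participants present in both halves
    r_half = np.corrcoef(even.values, odd.values)[0, 1]
    return 2 * r_half / (1 + r_half)  # Spearman-Brown prophecy formula


if __name__ == "__main__":
    # Fabricated demo data: 30 participants x 200 trials of a 0/1 capture score
    rng = np.random.default_rng(0)
    demo = pd.DataFrame({
        "subject": np.repeat(np.arange(30), 200),
        "trial": np.tile(np.arange(200), 30),
        "capture": rng.integers(0, 2, size=30 * 200),
    })
    print(f"split-half reliability: {split_half_reliability(demo, 'capture'):.3f}")

In practice, the same routine would be run separately on the distractor-fixation (oculomotor capture) and target-fixation-latency (fixation time) measures for each critical distractor comparison, and the corrected correlations compared across measures and age groups.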