Defending Yarbus: Eye movements reveal observers' task

Cited by: 174
Authors
Borji, Ali [1 ]
Itti, Laurent [1 ,2 ,3 ]
Affiliations
[1] Univ So Calif, Dept Comp Sci, Los Angeles, CA 90089 USA
[2] Univ So Calif, Grad Program Neurosci, Los Angeles, CA 90089 USA
[3] Univ So Calif, Dept Psychol, Los Angeles, CA 90089 USA
Source
JOURNAL OF VISION | 2014, Vol. 14, Issue 3
Funding
U.S. National Science Foundation
Keywords
visual attention; eye movements; bottom-up saliency; top-down attention; free viewing; visual search; mind reading; task decoding; Yarbus
Keywords Plus
VISUAL-ATTENTION; SALIENCY DETECTION; MEMORY; MODEL; OVERT; PERCEPTION; BEHAVIOR; PREDICT; SEARCH; GAZE
DOI
10.1167/14.3.29
CLC classification number
R77 [Ophthalmology]
Subject classification code
100212
Abstract
In a very influential yet anecdotal illustration, Yarbus suggested that human eye-movement patterns are modulated top-down by different task demands. While the hypothesis that it is possible to decode the observer's task from eye movements has received some support (e.g., Henderson, Shinkareva, Wang, Luke, & Olejarczyk, 2013; Iqbal & Bailey, 2004), Greene, Liu, and Wolfe (2012) argued against it by reporting a failure. In this study, we perform a more systematic investigation of this problem, probing a larger number of experimental factors than previously. Our main goal is to determine the informativeness of eye movements for task and mental-state decoding. We perform two experiments. In the first experiment, we reanalyze the data from a previous study by Greene et al. (2012) and, contrary to their conclusion, we report that it is possible to decode the observer's task from aggregate eye-movement features slightly but significantly above chance, using a Boosting classifier (34.12% correct vs. 25% chance level; binomial test, p = 1.0722e-04). In the second experiment, we repeat and extend Yarbus's original experiment by collecting eye movements of 21 observers viewing 15 natural scenes (including Yarbus's scene) under Yarbus's seven questions. We show that task decoding is possible, also moderately but significantly above chance (24.21% vs. 14.29% chance level; binomial test, p = 2.4535e-06). We thus conclude that Yarbus's idea is supported by our data and continues to be an inspiration for future computational and experimental eye-movement research. From a broader perspective, we discuss techniques, features, limitations, societal and technological impacts, and future …
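The abstract's "significantly above chance" claims rest on an exact one-sided binomial test of decoding accuracy against the chance level. A minimal sketch of that test, in pure Python, is below; the trial counts used here are illustrative assumptions chosen only to roughly match the reported 34% accuracy, not the paper's actual numbers.

```python
from math import comb

def binom_sf(k, n, p):
    """Exact one-sided binomial tail: P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * (p ** i) * ((1 - p) ** (n - i))
               for i in range(k, n + 1))

# Illustrative numbers only (not the paper's actual trial counts):
# 85 correct 4-way task classifications out of 249 trials,
# tested against the 25% chance level named in the abstract.
n_trials, n_correct, chance = 249, 85, 0.25
p_value = binom_sf(n_correct, n_trials, chance)
print(f"accuracy = {n_correct / n_trials:.2%}, p = {p_value:.2e}")
```

A small p-value here means an accuracy this high is very unlikely under pure guessing, which is the sense in which the decoding results are "slightly but significantly above chance."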
Pages: 22
Cited references
115 records in total
  • [1] Albert, M. V. (2012). Frontiers in Neurology, 3, 158. DOI: 10.3389/fneur.2012.00158
  • [2] Ballard, D. H., Hayhoe, M. M., & Pelz, J. B. (1995). Memory representations in natural tasks. Journal of Cognitive Neuroscience, 7(1), 66–80.
  • [3] Bertram, R., Helle, L., Kaakinen, J. K., & Svedstrom, E. (2013). The effect of expertise on eye movement behaviour in medical image perception. PLoS ONE, 8(6).
  • [4] Betz, T., Kietzmann, T. C., Wilming, N., & Koenig, P. (2010). Investigating task-dependent top-down effects on overt visual attention. Journal of Vision, 10(3), 1–14.
  • [5] Borji, A. (2014). IEEE Transactions on Systems, Man, and Cybernetics.
  • [6] Borji, A. (2012). CVPR. DOI: 10.1109/CVPR.2012.6247706
  • [7] Borji, A., Tavakoli, H. R., Sihite, D. N., & Itti, L. (2013). Analysis of scores, datasets, and models in visual saliency prediction. 2013 IEEE International Conference on Computer Vision (ICCV), 921–928.
  • [8] Borji, A., Sihite, D. N., & Itti, L. (2013). Quantitative analysis of human-model agreement in visual saliency modeling: A comparative study. IEEE Transactions on Image Processing, 22(1), 55–69.
  • [9] Borji, A., & Itti, L. (2013). State-of-the-art in visual attention modeling. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(1), 185–207.
  • [10] Borji, A. (2012). Proceedings of CVPR IEEE, 470. DOI: 10.1109/CVPR.2012.6247710