Inferring user tasks in pedestrian navigation from eye movement data in real-world environments

Cited: 47
Authors
Liao, Hua [1 ,2 ,3 ]
Dong, Weihua [1 ]
Huang, Haosheng [4 ]
Gartner, Georg [2 ,3 ]
Liu, Huiping [1 ]
Affiliations
[1] Beijing Normal Univ, Beijing Key Lab Remote Sensing Environm & Digital, State Key Lab Remote Sensing Sci, Beijing, Peoples R China
[2] Beijing Normal Univ, Fac Geog Sci, Beijing, Peoples R China
[3] Vienna Univ Technol, Dept Geodesy & Geoinformat, Vienna, Austria
[4] Univ Zurich, Dept Geog, GISci Ctr, Zurich, Switzerland
Funding
National Natural Science Foundation of China;
Keywords
Wayfinding; random forests; task inference; eye tracking; machine learning; OBSERVERS TASK; TRACKING; ATTENTION; GUIDANCE; YARBUS; MAPS;
DOI
10.1080/13658816.2018.1482554
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Eye movement data convey a wealth of information that can be used to probe human behaviour and cognitive processes. To date, eye tracking studies have mainly focused on laboratory-based evaluations of cartographic interfaces; in contrast, little attention has been paid to eye movement data mining for real-world applications. In this study, we propose using machine-learning methods to infer user tasks from eye movement data in real-world pedestrian navigation scenarios. We conducted a real-world pedestrian navigation experiment in which we recorded eye movement data from 38 participants. We trained and cross-validated a random forest classifier for classifying five common navigation tasks using five types of eye movement features. The results show that the classifier can achieve an overall accuracy of 67%. We found that statistical eye movement features and saccade encoding features are more useful than the other investigated types of features for distinguishing user tasks. We also identified that the choice of classifier, the time window size and the eye movement features considered are all important factors that influence task inference performance. These results open the door to innovative real-world applications, such as navigation systems that provide task-related information depending on the task a user is performing.
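The pipeline described in the abstract (five navigation tasks, feature vectors extracted per time window, a cross-validated random forest) can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the sample counts, feature dimensionality, and synthetic feature generation are assumptions made for the sake of a runnable example.

```python
# Hedged sketch of the abstract's setup: cross-validating a random forest
# on synthetic "eye movement" features for a 5-class task-inference problem.
# All sizes and the feature distribution are hypothetical, not from the paper.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)

n_windows = 500   # hypothetical number of time-window samples
n_features = 20   # hypothetical count of fixation/saccade statistics per window
n_tasks = 5       # five common navigation tasks, as in the study

# Synthetic features: each task shifts the feature means slightly, so the
# classes are partially separable, as real eye-movement features would be.
y = rng.integers(0, n_tasks, size=n_windows)
X = rng.normal(size=(n_windows, n_features)) + 0.5 * y[:, None]

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)  # 5-fold cross-validation accuracy
print(f"mean cross-validated accuracy: {scores.mean():.2f}")
```

In practice the feature types the paper compares (e.g. statistical versus saccade-encoding features) would each form a column block of `X`, and their usefulness could be inspected via the fitted forest's `feature_importances_`.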
Pages: 739 - 763
Page count: 25