Prediction of effort and eye movement measures from driving scene components

Cited by: 11
Authors
Cabrall, Christopher D. D. [1 ]
Happee, Riender [1 ]
de Winter, Joost C. F. [1 ]
Affiliations
[1] Delft Univ Technol, Dept Cognit Robot, Mekelweg 2, NL-2628 CD Delft, Netherlands
Keywords
Driving scenes; Eye-tracking; Workload; Driver assistance; Automated driving; Individual differences; DRIVERS;
DOI
10.1016/j.trf.2019.11.001
Chinese Library Classification
B849 [Applied Psychology];
Discipline code
040203;
Abstract
For transitions of control in automated vehicles, driver monitoring systems (DMS) may need to discern task difficulty and driver preparedness. Such DMS require models that relate driving scene components, driver effort, and eye measurements. Across two sessions, 15 participants enacted receiving control within 60 randomly ordered dashcam videos (3-second duration) with variations in visible scene components: road curve angle, road surface area, road users, symbols, infrastructure, and vegetation/trees while their eyes were measured for pupil diameter, fixation duration, and saccade amplitude. The subjective measure of effort and the objective measure of saccade amplitude evidenced the highest correlations (r = 0.34 and r = 0.42, respectively) with the scene component of road curve angle. In person-specific regression analyses combining all visual scene components as predictors, average predictive correlations ranged between 0.49 and 0.58 for subjective effort and between 0.36 and 0.49 for saccade amplitude, depending on cross-validation techniques of generalization and repetition. In conclusion, the present regression equations establish quantifiable relations between visible driving scene components with both subjective effort and objective eye movement measures. In future DMS, such knowledge can help inform road-facing and driver-facing cameras to jointly establish the readiness of would-be drivers ahead of receiving control. (C) 2019 Elsevier Ltd. All rights reserved.
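To make the analysis concrete, the sketch below illustrates the kind of person-specific regression with leave-one-out cross-validation that the abstract describes: effort is regressed on six scene-component predictors across 60 videos, and the held-out predictions are correlated with the observed ratings. All data here are synthetic placeholders and the function is a hypothetical illustration, not the authors' code or their actual predictor weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: 60 videos x 6 visible scene components
# (curve angle, road surface area, road users, symbols, infrastructure,
# vegetation/trees) and one participant's effort ratings.
X = rng.normal(size=(60, 6))
true_w = np.array([0.8, 0.3, 0.2, 0.1, 0.1, 0.05])  # placeholder weights
y = X @ true_w + rng.normal(scale=0.5, size=60)     # effort + noise

def loo_cv_predictions(X, y):
    """Leave-one-video-out cross-validated predictions from an
    ordinary least-squares fit (intercept included)."""
    n = len(y)
    Xa = np.column_stack([np.ones(n), X])  # prepend intercept column
    preds = np.empty(n)
    for i in range(n):
        mask = np.arange(n) != i           # hold out video i
        w, *_ = np.linalg.lstsq(Xa[mask], y[mask], rcond=None)
        preds[i] = Xa[i] @ w               # predict the held-out video
    return preds

preds = loo_cv_predictions(X, y)
r = np.corrcoef(preds, y)[0, 1]            # predictive correlation
print(f"cross-validated r = {r:.2f}")
```

Repeating this per participant and averaging the resulting correlations mirrors the "average predictive correlations" the abstract reports (0.49 to 0.58 for subjective effort), though the study's cross-validation schemes over generalization and repetition were more elaborate than plain leave-one-out.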
Pages: 187-197
Page count: 11