What are the visuo-motor tendencies of omnidirectional scene free-viewing in virtual reality?

Cited by: 9
Authors
David, Erwan Joel [1 ]
Lebranchu, Pierre [2 ,3 ]
Da Silva, Matthieu Perreira [2 ]
Le Callet, Patrick [2 ]
Affiliations
[1] Dept Psychol, Goethe Univ, Frankfurt, Germany
[2] Univ Nantes, LS2N UMR CNRS 6004, Nantes, France
[3] Nantes Univ Hosp, Nantes, France
Keywords
EYE-HEAD COORDINATION; GAZE SHIFTS; TIME-COURSE; BOTTOM-UP; MOVEMENTS; FIXATION; PERCEPTION; SACCADES; VISION; REPRESENTATIONS;
DOI
10.1167/jov.22.4.12
Chinese Library Classification (CLC)
R77 [Ophthalmology];
Discipline Code
100212 ;
Abstract
Central and peripheral vision during visual tasks have been extensively studied on two-dimensional screens, highlighting their perceptual and functional disparities. This study has two objectives: replicating on-screen gaze-contingent experiments that remove the central or peripheral field of view, this time in virtual reality, and identifying visuo-motor biases specific to the exploration of 360° scenes with a wide field of view. Our results are useful for vision modelling, with applications in gaze position prediction (e.g., content compression and streaming). We ask how previous on-screen findings translate to conditions in which observers can use their head to explore the stimuli. We implemented a gaze-contingent paradigm to simulate loss of vision in virtual reality, in which participants could freely view omnidirectional natural scenes. This protocol allows vision loss to be simulated over an extended field of view (>80°) and the head's contribution to visual attention to be studied. The time course of visuo-motor variables in our pure free-viewing task reveals long fixations and short saccades during the first seconds of exploration, contrary to the literature on visual tasks guided by instructions. We show that the effect of vision loss is reflected primarily in eye movements, in a manner consistent with the two-dimensional screen literature. We hypothesize that head movements mainly serve to explore the scenes during free viewing, since the presence of masks did not significantly affect head scanning behaviours. We present new fixational and saccadic visuo-motor tendencies in a 360° context that we hope will help in the creation of gaze prediction models dedicated to virtual reality.
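As an illustration of the gaze-contingent masking described in the abstract, the Python sketch below shows the core geometric test for deciding whether a viewing direction is blanked under a central-loss (simulated scotoma) or peripheral-loss (simulated tunnel vision) condition. This is a minimal sketch under assumed conventions: the function names, the 10° mask radius, and the per-direction formulation are illustrative choices and are not taken from the study's actual implementation.

import numpy as np

def angular_eccentricity(gaze_dir, sample_dir):
    # Angle (degrees) between the current gaze direction and a viewing
    # direction in the scene; both are 3-D unit vectors in the same frame.
    cos_angle = np.clip(np.dot(gaze_dir, sample_dir), -1.0, 1.0)
    return np.degrees(np.arccos(cos_angle))

def is_masked(gaze_dir, sample_dir, mask_type, radius_deg=10.0):
    # 'central'    -> blank everything within radius_deg of gaze (central loss)
    # 'peripheral' -> blank everything beyond radius_deg of gaze (peripheral loss)
    # 'none'       -> show the full scene (control condition)
    # radius_deg is an illustrative value, not the study's parameter.
    ecc = angular_eccentricity(gaze_dir, sample_dir)
    if mask_type == "central":
        return ecc <= radius_deg
    if mask_type == "peripheral":
        return ecc > radius_deg
    return False

# Example: gaze straight ahead, probe direction 15 degrees to the right.
gaze = np.array([0.0, 0.0, 1.0])
probe = np.array([np.sin(np.radians(15)), 0.0, np.cos(np.radians(15))])
print(is_masked(gaze, probe, "central"))     # False: outside the simulated scotoma
print(is_masked(gaze, probe, "peripheral"))  # True: outside the preserved central window

In practice this test would run per frame against the headset's eye-tracking sample, so that the mask follows gaze across the omnidirectional scene regardless of head orientation.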
Pages: 27