Real-World Virtual Reality With Head-Motion Parallax

Cited by: 1
Authors
Thatte, Jayant [1 ]
Girod, Bernd [2 ]
Affiliations
[1] Stanford University, Stanford, CA 94305, USA
[2] Stanford University, Electrical Engineering, Stanford, CA 94305, USA
Keywords
DOI
10.1109/MCG.2021.3082041
Chinese Library Classification
TP31 [Computer Software]
Subject Classification Codes
081202; 0835
Abstract
Most of the real-world virtual reality (VR) content available today is captured and rendered from a fixed vantage point. The visual-vestibular conflict arising from the lack of head-motion parallax degrades the feeling of presence in the virtual environment and has been shown to induce nausea and visual discomfort. We present an end-to-end framework for VR with head-motion parallax for real-world scenes. To capture both horizontally and vertically separated perspectives, we use a camera rig with two vertically stacked rings of outward-facing cameras. The data from the rig are processed offline and stored in a compact intermediate representation, which is used to render novel views for a head-mounted display in accordance with the viewer's head movements. We compare two promising intermediate representations, Stacked OmniStereo and Layered Depth Panoramas, and evaluate them in terms of objective image quality metrics and the occurrence of disocclusion holes in synthesized novel views.
Pages: 29-39
Page count: 11