Real-Time Depth Video-Based Rendering for 6-DoF HMD Navigation and Light Field Displays

Cited by: 14
Authors
Bonatto, Daniele [1 ,2 ]
Fachada, Sarah [1 ]
Rogge, Segolene [2 ]
Munteanu, Adrian [2 ]
Lafruit, Gauthier [1 ]
Affiliations
[1] Univ Libre Bruxelles, Lab Image Synth & Anal LISA, B-1050 Brussels, Belgium
[2] Vrije Univ Brussels, Dept Elect & Informat ETRO, B-1050 Brussels, Belgium
Funding
European Union Horizon 2020;
Keywords
Rendering (computer graphics); Real-time systems; Navigation; Light fields; Resists; Three-dimensional displays; Streaming media; Virtual reality; stereo image processing; stereo vision; free viewpoint navigation; reference view synthesizer; real-time view synthesis; IMAGE; COLOR; DIBR;
DOI
10.1109/ACCESS.2021.3123529
CLC Number
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
This paper presents a novel approach that provides immersive free navigation with six degrees of freedom (6-DoF) in real time for natural and virtual scenery, covering both static and dynamic content. Building on the state of the art in Depth Image-Based Rendering and the OpenGL pipeline, the proposed view synthesis method achieves free navigation at up to 90 FPS and accepts any number of input views with their corresponding depth maps as priors. Video content can be played back through GPU decompression, supporting real-time free navigation with full parallax. To render a novel viewpoint, each selected input view is warped using its camera pose and associated depth map via an implicit 3D representation; the warped views are then blended together to generate the requested virtual view. Several view blending approaches specifically designed to avoid visual artifacts are compared. Using as few as four input views proves to be an optimal trade-off between computation time and quality, allowing high-quality stereoscopic views to be synthesized in real time and offering a genuinely immersive virtual reality experience. Additionally, the proposed approach delivers high-quality rendering of 3D scenery on holographic light field displays. Our results are comparable, both objectively and subjectively, to those of the state-of-the-art view synthesis tools NeRF and LLFF, while maintaining lower overall complexity and real-time rendering.
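The warp-and-blend pipeline summarized above can be illustrated with a minimal sketch. The NumPy code below shows generic forward Depth Image-Based Rendering, not the paper's OpenGL implementation: each input view is unprojected with its depth map, reprojected into the virtual camera with z-buffered splatting, and the warped views are blended with a simple distance-based weight (one plausible scheme among the blending variants the paper compares). All function and parameter names (warp_view, blend_views, K_src, pose_src, ...) are hypothetical.

```python
import numpy as np

def warp_view(color, depth, K_src, pose_src, K_dst, pose_dst):
    """Forward-warp one input view into a virtual camera (point-splat sketch).

    color: (H, W, 3) image, depth: (H, W) metric z-depth,
    K_*: (3, 3) intrinsics, pose_*: (4, 4) camera-to-world extrinsics.
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T  # 3 x N

    # Unproject pixels to source camera space, then to world space.
    pts_cam = (np.linalg.inv(K_src) @ pix) * depth.reshape(1, -1)
    pts_world = pose_src @ np.vstack([pts_cam, np.ones((1, pts_cam.shape[1]))])

    # Reproject the 3D points into the target camera.
    pts_dst = np.linalg.inv(pose_dst) @ pts_world
    z = pts_dst[2]
    z_safe = np.where(z > 0, z, np.inf)          # avoid division warnings
    proj = K_dst @ pts_dst[:3]
    u2 = np.round(proj[0] / z_safe).astype(np.int64)
    v2 = np.round(proj[1] / z_safe).astype(np.int64)

    # Z-buffered splatting: nearer points win, disoccluded pixels stay holes.
    out = np.zeros((H, W, 3), dtype=np.float64)
    zbuf = np.full((H, W), np.inf)
    valid = (z > 0) & (u2 >= 0) & (u2 < W) & (v2 >= 0) & (v2 < H)
    src = color.reshape(-1, 3)
    for i in np.flatnonzero(valid):
        if z[i] < zbuf[v2[i], u2[i]]:
            zbuf[v2[i], u2[i]] = z[i]
            out[v2[i], u2[i]] = src[i]
    return out, zbuf

def blend_views(warped, zbufs, cam_centers, target_center, eps=1e-6):
    """Blend warped views, weighting cameras nearest the virtual viewpoint.

    cam_centers[i] is pose_src[:3, 3] of input view i; holes (inf in the
    z-buffer) are excluded so other views can fill disocclusions.
    """
    acc = np.zeros_like(warped[0], dtype=np.float64)
    wsum = np.zeros(acc.shape[:2] + (1,))
    for img, zbuf, c in zip(warped, zbufs, cam_centers):
        w = 1.0 / (np.linalg.norm(c - target_center) + eps)  # per-view weight
        mask = np.isfinite(zbuf)[..., None]                  # skip holes
        acc += w * img * mask
        wsum += w * mask
    return acc / np.maximum(wsum, eps)
```

In the paper this pipeline runs on the GPU through OpenGL, which is what makes the reported 90 FPS and the four-view quality/speed trade-off feasible; a CPU splatting loop like the one above is only meant to make the geometry of the method explicit.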
Pages: 146868 - 146887
Page count: 20