OpenDIBR: Open Real-Time Depth-Image-Based renderer of light field videos for VR

Cited by: 0
Authors
Julie Artois
Martijn Courteaux
Glenn Van Wallendael
Peter Lambert
Affiliations
[1] IDLab-MEDIA
[2] Ghent University - Imec
Source
Multimedia Tools and Applications | 2024, Vol. 83
Keywords
Light field rendering; View synthesis; Depth-image-based rendering; Real time; Virtual Reality;
DOI
Not available
Abstract
In this work, we present a novel light field rendering framework that allows a viewer to walk around a virtual scene reconstructed from a multi-view image/video dataset with visual and depth information. With immersive media applications in mind, the framework is designed to support dynamic scenes through input videos, give the viewer full freedom of movement in a large area, and achieve real-time rendering, even in Virtual Reality (VR). This paper explores how Depth-Image-Based Rendering (DIBR) is one of the few state-of-the-art techniques that achieves all of these requirements. We therefore implemented OpenDIBR, an openly available DIBR renderer, as a proof of concept for the framework. It uses Nvidia’s Video Codec SDK to rapidly decode the color and depth videos on the GPU. The decoded depth maps and color frames are then warped to the output view in OpenGL. The input contributions are blended together through a per-pixel weighted average that depends on the input and output camera positions. Experiments comparing visual quality conclude that OpenDIBR is, objectively and subjectively, similar to TMIV and better than NeRF. Performance-wise, OpenDIBR runs at 90 Hz for up to 4 full HD input videos on desktop, or 2–4 in VR, and there are options to increase this further by lowering the video bitrates, reducing the depth map resolution, or dynamically lowering the number of rendered input videos.
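The per-pixel weighted blending mentioned in the abstract can be illustrated with a minimal sketch. This is not the OpenDIBR source (which performs the blend on the GPU in OpenGL); it is a hypothetical NumPy reconstruction in which each warped input view contributes with a weight that falls off with the distance between its camera and the output camera, and holes left by the warp (invalid pixels) are excluded per pixel. The function name, signature, and inverse-distance weighting are illustrative assumptions.

```python
import numpy as np

def blend_views(warped_colors, valid_masks, cam_positions, out_position, eps=1e-6):
    """Blend warped input views into one output image.

    warped_colors : list of (H, W, 3) float arrays, each input already
                    warped into the output view.
    valid_masks   : list of (H, W) bool arrays; False marks warp holes.
    cam_positions : list of input camera positions (3-vectors).
    out_position  : output camera position (3-vector).

    Each input's global weight is 1 / (distance to output camera);
    per pixel, only inputs with a valid sample contribute, and the
    result is normalized by the sum of contributing weights.
    """
    h, w, _ = warped_colors[0].shape
    acc = np.zeros((h, w, 3), dtype=np.float64)   # weighted color sum
    wsum = np.zeros((h, w, 1), dtype=np.float64)  # per-pixel weight sum
    for color, mask, pos in zip(warped_colors, valid_masks, cam_positions):
        dist = np.linalg.norm(np.asarray(pos, float) - np.asarray(out_position, float))
        weight = 1.0 / (dist + eps)               # closer cameras dominate
        m = mask[..., None].astype(np.float64)
        acc += weight * m * color
        wsum += weight * m
    return acc / np.maximum(wsum, eps)            # avoid divide-by-zero in holes
```

For two fully valid inputs at equal distance from the output camera, every pixel is the plain average of the two views; as the output camera approaches one input camera, that view's contribution dominates.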
Pages: 25797–25815
Page count: 18
Related Papers
50 results
  • [1] OpenDIBR: Open Real-Time Depth-Image-Based renderer of light field videos for VR
    Artois, Julie
    Courteaux, Martijn
    Van Wallendael, Glenn
    Lambert, Peter
    MULTIMEDIA TOOLS AND APPLICATIONS, 2024, 83 (09) : 25797 - 25815
  • [2] Real-time Depth-Image-Based Rendering on GPU
    Sun, Zengzeng
    Jung, Cheolkon
    2015 INTERNATIONAL CONFERENCE ON CYBER-ENABLED DISTRIBUTED COMPUTING AND KNOWLEDGE DISCOVERY, 2015, : 324 - 328
  • [3] Real-Time Depth-Image-Based Rendering for 3DTV Using OpenCL
    de Albuquerque Azevedo, Roberto Gerson
    Ismerio, Fernando
    Raposo, Alberto Barbosa
    Gomes Soares, Luiz Fernando
    ADVANCES IN VISUAL COMPUTING (ISVC 2014), PT 1, 2014, 8887 : 97 - 106
  • [4] Real-Time Rendering Method of Depth-Image-Based Multiple Reference Views for Integral Imaging Display
    Guan, Yanxin
    Sang, Xinzhu
    Xing, Shujun
    Li, Yuanhang
    Yan, Binbin
    IEEE ACCESS, 2019, 7 : 170545 - 170552
  • [5] A real-time low-latency hardware light-field renderer
    Regan, MJP
    Miller, GSP
    Rubin, SM
    Kogelnik, C
    SIGGRAPH 99 CONFERENCE PROCEEDINGS, 1999, : 287 - 290
  • [6] A real-time sound field renderer based on digital Huygens' model
    Tan Yiyu
    Inoguchi, Yasushi
    Sugawara, Eiko
    Otani, Makoto
    Iwaya, Yukio
    Sato, Yukinori
    Matsuoka, Hiroshi
    Tsuchiya, Takao
    JOURNAL OF SOUND AND VIBRATION, 2011, 330 (17) : 4302 - 4312
  • [7] A Compact Light Field Camera for Real-Time Depth Estimation
    Anisimov, Yuriy
    Wasenmuller, Oliver
    Stricker, Didier
    COMPUTER ANALYSIS OF IMAGES AND PATTERNS, CAIP 2019, PT I, 2019, 11678 : 52 - 63
  • [8] Sound field renderer with loudspeaker array using real-time convolver
    Tsuchiya, Takao
    Takenuki, Issei
    Sugiura, Kyousuke
    2018 AES INTERNATIONAL CONFERENCE ON SPATIAL REPRODUCTION - AESTHETICS AND SCIENCE, 2018,
  • [9] Real-Time Image-based Smoke Detection in Endoscopic Videos
    Leibetseder, Andreas
    Primus, Manfred Jurgen
    Petscharnig, Stefan
    Schoeffmann, Klaus
    PROCEEDINGS OF THE THEMATIC WORKSHOPS OF ACM MULTIMEDIA 2017 (THEMATIC WORKSHOPS'17), 2017, : 296 - 304
  • [10] GPU based real-time rendering of spherical depth image
    Zhu, Jian
    Wu, En-Hua
    Jisuanji Xuebao/Chinese Journal of Computers, 2009, 32 (02): : 231 - 240