RGB-D IBR: Rendering Indoor Scenes Using Sparse RGB-D Images with Local Alignments
Cited by: 1
Authors:
Jeong, Yeongyu [1]
Kim, Haejoon [1]
Seo, Hyewon [2]
Cordier, Frederic [3]
Lee, Seungyong [1]
Affiliations:
[1] POSTECH, Pohang Si, Gyeongsangbuk Do, South Korea
[2] Univ Strasbourg, Strasbourg, France
[3] Univ Haute Alsace, Mulhouse, France
Source:
Proceedings of I3D 2016: 20th ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games, 2016
Keywords:
image-based rendering;
RGB-D images;
local alignment;
3D navigation;
DOI:
10.1145/2856400.2876006
CLC number:
TP301 [Theory and Methods];
Subject classification code:
081202;
Abstract:
This paper presents an image-based rendering (IBR) system based on RGB-D images. The input of our system consists of RGB-D images captured at sparse locations in the scene and can be expanded by adding new RGB-D images. The sparsity of the RGB-D images increases the usability of our system, as the user need not capture an RGB-D image stream in a single shot, which may require careful planning for a hand-held camera. Our system begins with a single RGB-D image, and images are incrementally added one by one. For each newly added image, a batch process is performed to align it with previously added images. The process does not include a global alignment step, such as bundle adjustment, and can be completed quickly by computing only local alignments of RGB-D images. Aligned images are represented as a graph, where each node is an input image and each edge contains relative pose information between nodes. A novel view image is rendered in real time by picking the nearest input image as the reference and then blending the neighboring images based on depth information. Experimental results with indoor scenes captured using Microsoft Kinect demonstrate that our system can synthesize high-quality novel view images from a sparse set of RGB-D images.
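The abstract describes two pieces concrete enough to sketch: a graph whose nodes are input RGB-D images and whose edges store relative poses obtained from local alignment, and the selection of the nearest input image as the reference for novel-view rendering. The following Python sketch is only illustrative; the class and function names (RGBDNode, ImageGraph, add_image, nearest_reference) are assumptions, not the authors' implementation, and the actual alignment and depth-based blending stages are omitted.

    import numpy as np

    class RGBDNode:
        """One input RGB-D image: color, depth, and a 4x4 camera-to-world pose."""
        def __init__(self, color, depth, pose):
            self.color = color          # H x W x 3 color image
            self.depth = depth          # H x W depth map (metres)
            self.pose = pose            # 4x4 camera-to-world matrix

    class ImageGraph:
        """Graph of aligned RGB-D images; edges hold relative poses from local alignment."""
        def __init__(self):
            self.nodes = []
            self.edges = {}             # (ref_idx, new_idx) -> 4x4 relative pose

        def add_image(self, color, depth, ref_idx=None, rel_pose=None):
            """Add an image aligned locally to one existing node (no global bundle adjustment)."""
            if ref_idx is None:
                pose = np.eye(4)        # first image defines the world frame
            else:
                # chain the locally estimated relative pose onto the reference pose
                pose = self.nodes[ref_idx].pose @ rel_pose
                self.edges[(ref_idx, len(self.nodes))] = rel_pose
            self.nodes.append(RGBDNode(color, depth, pose))
            return len(self.nodes) - 1

        def nearest_reference(self, view_pose):
            """Pick the input image whose camera centre is closest to the novel view."""
            centers = np.array([n.pose[:3, 3] for n in self.nodes])
            return int(np.argmin(np.linalg.norm(centers - view_pose[:3, 3], axis=1)))

Chaining each new pose along a single graph edge, as sketched above, is one way to realize the paper's stated design choice of using only local alignments rather than a global optimization such as bundle adjustment.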