Robust and Efficient CPU-Based RGB-D Scene Reconstruction

Cited by: 0
Authors
Li J. [1 ,2 ]
Gao W. [1 ,2 ]
Li H. [1 ,2 ]
Tang F. [1 ,2 ]
Wu Y. [1 ,2 ]
Affiliations
[1] National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing
[2] School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing
Source
Gao, Wei (wgao@nlpr.ia.ac.cn) | 2018 / MDPI AG / Vol. 18
Funding
National Natural Science Foundation of China;
Keywords
3D reconstruction; Camera tracking; Simultaneous localization and mapping (SLAM); Volumetric integration;
DOI
10.3390/s18113652
Abstract
3D scene reconstruction is an important topic in computer vision. A complete scene is reconstructed from views acquired along the camera trajectory, each view containing a small part of the scene. Textureless scenes are a well-known Gordian knot for camera tracking, and obtaining accurate 3D models quickly remains a major challenge for existing systems. Targeting robotic applications, we propose a robust CPU-based approach that efficiently reconstructs indoor scenes with a consumer RGB-D camera. The proposed approach bridges feature-based camera tracking and volumetric data integration, and performs well in terms of both robustness and efficiency. The key points of our approach are: (i) a robust and fast camera tracking method combining points and edges, which improves tracking stability in textureless scenes; (ii) an efficient data fusion strategy that selects camera views and integrates RGB-D images on multiple scales, which enhances the efficiency of volumetric integration; (iii) a novel RGB-D scene reconstruction system that can be implemented quickly on a standard CPU. Experimental results demonstrate that our approach reconstructs scenes with higher robustness and efficiency than state-of-the-art reconstruction systems. © 2018 by the authors. Licensee MDPI, Basel, Switzerland.
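The volumetric integration mentioned in the abstract builds on the truncated signed distance function (TSDF) fusion of Curless and Levoy [3]. As a rough illustration only, not the authors' implementation, the following NumPy sketch fuses one depth image into a voxel grid with the standard weighted running-average update; the grid layout, the intrinsics handling, and the function name `integrate_tsdf` are assumptions made for this example.

```python
import numpy as np

def integrate_tsdf(tsdf, weight, depth, K, cam_pose, voxel_size, trunc):
    """Fuse one depth image into a TSDF voxel grid (Curless-Levoy style).

    tsdf, weight : (X, Y, Z) arrays of signed distances and fusion weights
    depth        : (H, W) depth image in metres (0 marks invalid pixels)
    K            : 3x3 camera intrinsics
    cam_pose     : 4x4 camera-to-world transform
    """
    X, Y, Z = tsdf.shape
    H, W = depth.shape
    # World coordinates of every voxel centre (grid origin at the world origin).
    ii, jj, kk = np.meshgrid(np.arange(X), np.arange(Y), np.arange(Z),
                             indexing="ij")
    pts_w = np.stack([ii, jj, kk], axis=-1).reshape(-1, 3) * voxel_size
    # Transform voxel centres into the camera frame.
    w2c = np.linalg.inv(cam_pose)
    pts_c = pts_w @ w2c[:3, :3].T + w2c[:3, 3]
    z = pts_c[:, 2]
    zsafe = np.where(z > 1e-6, z, 1e-6)          # avoid division by zero
    # Project into the image plane.
    uv = pts_c @ K.T
    u = np.round(uv[:, 0] / zsafe).astype(int)
    v = np.round(uv[:, 1] / zsafe).astype(int)
    valid = (z > 1e-6) & (u >= 0) & (u < W) & (v >= 0) & (v < H)
    d = np.zeros_like(z)
    d[valid] = depth[v[valid], u[valid]]
    valid &= d > 0
    # Signed distance along the viewing ray, truncated to [-1, 1].
    sdf = d - z
    valid &= sdf > -trunc                        # skip voxels far behind the surface
    sdf = np.clip(sdf / trunc, -1.0, 1.0)
    # Weighted running-average update per voxel.
    t, w = tsdf.reshape(-1), weight.reshape(-1)
    t[valid] = (t[valid] * w[valid] + sdf[valid]) / (w[valid] + 1.0)
    w[valid] += 1.0
    return t.reshape(tsdf.shape), w.reshape(weight.shape)
```

In a full pipeline, each tracked view would be fused this way; the multi-scale strategy described in the abstract would additionally choose which views to integrate and at which grid resolution.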
References
Total: 38
[1]  
Izadi S., Kim D., Hilliges O., Molyneaux D., Newcombe R., Kohli P., Shotton J., Hodges S., Freeman D., Davison A., KinectFusion: Real-time 3D reconstruction and interaction using a moving depth camera, Proceedings of the 24th Annual ACM Symposium on User Interface Software and Technology, pp. 559-568, (2011)
[2]  
Newcombe R.A., Izadi S., Hilliges O., Molyneaux D., Kim D., Davison A.J., Kohi P., Shotton J., Hodges S., Fitzgibbon A., KinectFusion: Real-time dense surface mapping and tracking, Proceedings of the IEEE International Symposium on Mixed and Augmented Reality, pp. 127-136, (2011)
[3]  
Curless B., Levoy M., A volumetric method for building complex models from range images, Proceedings of the Conference on Computer Graphics and Interactive Techniques, pp. 303-312, (1996)
[4]  
Rusinkiewicz S., Levoy M., Efficient variants of the ICP algorithm, Proceedings of the International Conference on 3-D Digital Imaging and Modeling, (2001)
[5]  
Whelan T., Kaess M., Fallon M., Johannsson H., Leonard J., Mcdonald J., Kintinuous: Spatially extended KinectFusion, Proceedings of the RSS Workshop on RGB-D: Advanced Reasoning with Depth Cameras, (2012)
[6]  
Whelan T., Kaess M., Leonard J.J., McDonald J., Deformation-based loop closure for large scale dense RGB-D SLAM, Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 548-555, (2013)
[7]  
Kerl C., Sturm J., Cremers D., Robust odometry estimation for RGB-D cameras, Proceedings of the IEEE International Conference on Robotics and Automation, pp. 3748-3754, (2013)
[8]  
Kerl C., Sturm J., Cremers D., Dense visual SLAM for RGB-D cameras, Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 2100-2106, (2013)
[9]  
Whelan T., Salas-Moreno R.F., Glocker B., Davison A.J., Leutenegger S., ElasticFusion: Real-time dense SLAM and light source estimation, Int. J. Robot. Res., 35, pp. 1697-1716, (2016)
[10]  
Prisacariu V.A., Kahler O., Golodetz S., Sapienza M., Cavallari T., Torr P.H.S., Murray D.W., InfiniTAM v3: A framework for large-scale 3D reconstruction with loop closure, (2017)