RGB-D dense mapping with feature-based method

Times cited: 0
Authors
Fu, Xingyin [1 ,2 ,3 ,4 ]
Zhu, Feng [1 ,3 ,4 ]
Wu, Qingxiao [1 ,3 ,4 ]
Lu, Rongrong [1 ,2 ,3 ,4 ]
Affiliations
[1] Chinese Acad Sci, Shenyang Inst Automat, Shenyang 110016, Liaoning, Peoples R China
[2] Univ Chinese Acad Sci, Beijing 100049, Peoples R China
[3] Chinese Acad Sci, Key Lab Optoelect Informat Proc, Shenyang 110016, Liaoning, Peoples R China
[4] Key Lab Image Understanding & Comp Vis, Shenyang 110016, Liaoning, Peoples R China
Keywords
dense SLAM; RGB-D camera; TSDF; reconstruction; real-time; SLAM;
DOI
10.1117/12.2505305
Chinese Library Classification: O43 [Optics]
Discipline codes: 070207; 0803
Abstract
Simultaneous Localization and Mapping (SLAM) plays an important role in navigation and augmented reality (AR) systems. While feature-based visual SLAM has reached a relatively mature stage, RGB-D dense SLAM has become popular since the advent of consumer RGB-D cameras. Unlike feature-based visual SLAM systems, RGB-D dense SLAM systems such as KinectFusion compute camera poses by registering the current frame against images raycast from the global model and produce a dense surface by fusing the RGB-D stream. In this paper, we propose a novel reconstruction system built on ORB-SLAM2. To generate the dense surface in real time, we first fuse the RGB-D frames with a truncated signed distance function (TSDF). Because camera tracking drift is inevitable, it is unwise to represent the entire reconstruction space with a single TSDF model or to represent the entire measured surface with voxel hashing. Instead, we use the moving volume proposed in Kintinuous to represent the reconstruction region around the current frame frustum. Unlike Kintinuous, which corrects points with an embedded deformation graph after pose-graph optimization, we re-fuse the images with the optimized camera poses and regenerate the dense surface after the user ends the scanning. Second, we use the reconstructed dense map to filter outliers from the sparse feature map. The depth maps of the keyframes are raycast from the TSDF volume according to the camera poses, and the feature points in the local map are projected into the nearest keyframe. If the discrepancy between the depth of a feature and that of the corresponding point in the depth map exceeds a threshold, the feature is considered an outlier and removed from the feature map. The discrepancy is also combined with the feature's pyramid level to compute the information matrix when minimizing the reprojection error, so the features in the sparse map reconstructed near the produced dense surface have a larger influence on camera tracking. We compare the accuracy of the produced camera trajectories and 3D models with state-of-the-art systems on the TUM and ICL-NUIM RGB-D benchmark datasets. Experimental results show that our system achieves state-of-the-art results.
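The outlier-filtering step described in the abstract (project each map feature into the nearest keyframe, compare its depth against the depth raycast from the TSDF volume, discard it when the discrepancy is large) is concrete enough to sketch. The C++ fragment below is a minimal illustration under assumed names (Vec3, Keyframe, Feature, filterFeaturesByDepth) and assumed pinhole intrinsics; it is not the authors' implementation, and the information-matrix weighting by discrepancy and pyramid level is only noted in a comment.

// Hypothetical C++ sketch of the depth-consistency outlier test described above:
// a map feature is projected into the nearest keyframe, its depth is compared with
// the depth raycast from the TSDF volume, and features whose discrepancy exceeds a
// threshold are flagged. All names and the intrinsics are illustrative assumptions.
#include <cmath>
#include <cstdio>
#include <vector>

struct Vec3 { double x, y, z; };

struct Keyframe {
    double R[9];                     // world-to-camera rotation, row-major 3x3
    Vec3 t;                          // world-to-camera translation
    double fx, fy, cx, cy;           // pinhole intrinsics
    int width, height;
    std::vector<float> raycastDepth; // depth raycast from the TSDF volume (metres, 0 = no surface)
};

struct Feature {
    Vec3 pw;            // 3D position in world coordinates
    int pyramidLevel;   // feature pyramid level, usable for information-matrix weighting
    bool isOutlier;
};

// Transform a world point into the keyframe's camera frame.
static Vec3 toCamera(const Keyframe& kf, const Vec3& p) {
    return { kf.R[0]*p.x + kf.R[1]*p.y + kf.R[2]*p.z + kf.t.x,
             kf.R[3]*p.x + kf.R[4]*p.y + kf.R[5]*p.z + kf.t.y,
             kf.R[6]*p.x + kf.R[7]*p.y + kf.R[8]*p.z + kf.t.z };
}

// Flag features whose depth disagrees with the TSDF-raycast depth map of the keyframe.
// Returns the number of features marked as outliers.
int filterFeaturesByDepth(std::vector<Feature>& features, const Keyframe& kf,
                          double maxDiscrepancy) {  // metres, e.g. 0.05 (assumed value)
    int removed = 0;
    for (Feature& f : features) {
        Vec3 pc = toCamera(kf, f.pw);
        if (pc.z <= 0.0) continue;                              // behind the camera
        int u = static_cast<int>(kf.fx * pc.x / pc.z + kf.cx + 0.5);
        int v = static_cast<int>(kf.fy * pc.y / pc.z + kf.cy + 0.5);
        if (u < 0 || u >= kf.width || v < 0 || v >= kf.height) continue;
        float dMap = kf.raycastDepth[v * kf.width + u];
        if (dMap <= 0.0f) continue;                             // no raycast surface here
        double disc = std::fabs(pc.z - static_cast<double>(dMap));
        if (disc > maxDiscrepancy) { f.isOutlier = true; ++removed; }
        // Otherwise, disc could be combined with f.pyramidLevel to scale the
        // information matrix of the reprojection-error term, as the abstract suggests.
    }
    return removed;
}

int main() {
    // Toy keyframe: identity pose, VGA pinhole camera, raycast depth of a flat wall at 2 m.
    Keyframe kf;
    const double I[9] = {1,0,0, 0,1,0, 0,0,1};
    for (int i = 0; i < 9; ++i) kf.R[i] = I[i];
    kf.t = {0.0, 0.0, 0.0};
    kf.fx = kf.fy = 500.0; kf.cx = 320.0; kf.cy = 240.0;
    kf.width = 640; kf.height = 480;
    kf.raycastDepth.assign(static_cast<size_t>(kf.width) * kf.height, 2.0f);

    std::vector<Feature> feats(2);
    feats[0].pw = {0.0, 0.0, 2.0}; feats[0].pyramidLevel = 0; feats[0].isOutlier = false; // on the surface
    feats[1].pw = {0.1, 0.0, 3.0}; feats[1].pyramidLevel = 0; feats[1].isOutlier = false; // 1 m off
    std::printf("outliers removed: %d\n", filterFeaturesByDepth(feats, kf, 0.05));        // prints 1
    return 0;
}

Compiled as a single file (e.g. with g++ -std=c++11), the toy main flags the feature lying 1 m off the raycast surface; in a full system the keyframe depth map would come from raycasting the moving TSDF volume rather than a synthetic constant.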
Pages: 10