Robust Tracking and Clean Background Dense Reconstruction for RGB-D SLAM in a Dynamic Indoor Environment

Cited by: 1
Authors
Zhu, Fengbo [1 ]
Zheng, Shunyi [1 ]
Huang, Xia [1 ]
Wang, Xiqi [1 ]
Affiliations
[1] Wuhan Univ, Sch Remote Sensing & Informat Engn, Wuhan 430079, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
dynamic scene; semantic SLAM; RGB-D; mask refinement; background reconstruction; VISUAL SLAM;
DOI
10.3390/machines10100892
Chinese Library Classification (CLC)
TM [Electrical engineering]; TN [Electronics & communication technology];
Discipline Codes
0808; 0809;
Abstract
This article proposes a two-stage simultaneous localization and mapping (SLAM) method for RGB-D (red-green-blue-depth) cameras in dynamic environments, which not only improves tracking robustness and trajectory accuracy but also reconstructs a clean, dense model of the static background. In the first stage, to exclude features in dynamic regions from tracking, dynamic object masks are extracted by Mask-RCNN and refined with connected-component analysis and a reference-frame-based method. Feature points, lines, and planes outside the dynamic object regions are then used to build an optimization model that improves tracking accuracy and robustness. After tracking, the masks are further refined by a multiview projection method. In the second stage, to accurately determine the pending region of each frame, which contains both the dynamic object region and the newly observed region, a method based on a ray-casting algorithm that makes full use of the first-stage results is proposed. To extract the static region from the pending region, processing methods for divisible and indivisible regions and a bounding-box tracking method are designed. The extracted static regions are then fused into the map using a truncated signed distance function (TSDF), yielding a clean static background model. The methods were verified on public datasets and in real scenes. The results show that they achieve comparable or better trajectory accuracy and the best robustness, and can construct a clean static background model in dynamic scenes.
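The connected-component mask refinement mentioned in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the function name `refine_mask`, the 4-connectivity, and the `min_area` threshold are illustrative assumptions; it simply shows how small spurious components of a binary segmentation mask can be discarded before the mask is used to gate feature extraction.

```python
from collections import deque

def refine_mask(mask, min_area):
    """Drop connected components smaller than min_area from a binary mask.

    mask is a list of lists of 0/1 (e.g., a binarized Mask-RCNN output).
    Returns a new mask of the same shape containing only the components
    with at least min_area pixels, using 4-connectivity.
    """
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    out = [[0] * w for _ in range(h)]
    for sy in range(h):
        for sx in range(w):
            if mask[sy][sx] and not seen[sy][sx]:
                # BFS flood fill to collect one 4-connected component
                comp, queue = [], deque([(sy, sx)])
                seen[sy][sx] = True
                while queue:
                    y, x = queue.popleft()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                # Keep the component only if it is large enough
                if len(comp) >= min_area:
                    for y, x in comp:
                        out[y][x] = 1
    return out
```

In practice the paper combines this kind of cleanup with a reference-frame-based check and, after tracking, a multiview projection step; this sketch covers only the size-filtering idea.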
Pages: 26