A Robust and Real-Time RGB-D SLAM Method with Dynamic Point Recognition and Depth Segmentation Optimization

Times Cited: 0
Authors
Chen, Shuaixin [1 ]
Gan, Baolin [1 ]
Zhang, Congxuan [1 ]
Chen, Zhen [1 ]
Lu, Ke [2 ]
Lu, Feng
Affiliations
[1] Nanchang Hangkong Univ, Nanchang 330063, Jiangxi, Peoples R China
[2] Univ Chinese Acad Sci, Beijing 100049, Peoples R China
Source
PATTERN RECOGNITION AND COMPUTER VISION, PT IX, PRCV 2024 | 2025, Vol. 15039
Funding
National Natural Science Foundation of China
Keywords
SLAM; dynamic environments; optical flow; spatial distribution pattern; depth segmentation; ORB-SLAM; RECONSTRUCTION; TRACKING;
DOI
10.1007/978-981-97-8692-3_18
Chinese Library Classification (CLC) Number
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Simultaneous localization and mapping (SLAM), one of the core enabling technologies for intelligent mobile robots, has attracted much attention in recent years. However, the applicability of SLAM algorithms in practical scenarios is limited by their strict assumption of a static environment. Although many recent SLAM systems introduce semantic segmentation or object detection schemes to identify dynamic regions, these methods fail to detect regions with unknown semantics and are highly time-consuming. To address these problems, we propose a robust and real-time RGB-D SLAM method with dynamic point recognition and depth segmentation optimization. Specifically, we first devise a self-adaptive feature point tracking scheme based on sparse optical flow, which accelerates feature point tracking and avoids local optima. Then, we design a dynamic feature point recognition model that uses motion information and spatial distribution patterns to distinguish dynamic from static point clusters. Finally, we exploit a depth segmentation optimization scheme to recover misclassified feature points, which further improves SLAM performance. Experimental comparisons with several state-of-the-art (SOTA) models demonstrate that the proposed method achieves the best performance among geometry-based methods and performs competitively against deep learning-based models, especially in highly dynamic environments.
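The pipeline summarized in the abstract combines three geometric components: self-adaptive sparse optical-flow tracking, dynamic point recognition from motion and spatial distribution, and depth-segmentation-based recovery of misclassified points. The sketch below illustrates only the first two ideas and is not the authors' implementation: it assumes OpenCV's pyramidal Lucas-Kanade tracker and scikit-learn's DBSCAN, and the function name classify_dynamic_points, the clustering radius, and the motion threshold are illustrative placeholders.

# Minimal sketch (assumed implementation, not the paper's code): track sparse
# features with Lucas-Kanade optical flow, group them by image position, and
# flag clusters whose motion deviates from the dominant (camera-induced) motion.
import cv2
import numpy as np
from sklearn.cluster import DBSCAN

def classify_dynamic_points(prev_gray, curr_gray, motion_eps=1.5, min_cluster=8):
    # Detect sparse corner features in the previous grayscale frame.
    pts_prev = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                       qualityLevel=0.01, minDistance=7)
    if pts_prev is None:
        return np.empty((0, 2)), np.empty(0, dtype=bool)

    # Sparse (pyramidal Lucas-Kanade) optical flow from previous to current frame.
    pts_curr, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray,
                                                   pts_prev, None,
                                                   winSize=(21, 21), maxLevel=3)
    good = status.ravel() == 1
    p0 = pts_prev[good].reshape(-1, 2)
    p1 = pts_curr[good].reshape(-1, 2)
    if len(p1) == 0:
        return p1, np.empty(0, dtype=bool)
    flow_mag = np.linalg.norm(p1 - p0, axis=1)

    # Group tracked points by their spatial distribution in the current image;
    # eps and min_samples are placeholder values, not tuned parameters.
    labels = DBSCAN(eps=40.0, min_samples=min_cluster).fit_predict(p1)

    # Treat a cluster as dynamic when its median flow magnitude deviates from the
    # global median (dominated by the static background) by more than motion_eps px.
    global_med = np.median(flow_mag)
    dynamic = np.zeros(len(p1), dtype=bool)
    for lbl in np.unique(labels):
        if lbl == -1:
            continue  # DBSCAN noise points: leave them as static candidates
        idx = labels == lbl
        if abs(np.median(flow_mag[idx]) - global_med) > motion_eps:
            dynamic[idx] = True
    return p1, dynamic

In a full system of this kind, points flagged as dynamic would typically be excluded from camera pose estimation, and a subsequent depth segmentation step, as the abstract describes, would re-examine points near object boundaries to recover those that were misclassified.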
Pages: 247-261
Number of Pages: 15