Visual SLAM Method Based on Motion Segmentation

Times Cited: 0
Authors
Chen, Qingzi [1 ]
Zhu, Lingyun [1 ]
Liu, Jirui [1 ]
Affiliations
[1] Chongqing Univ Technol, Chongqing, Peoples R China
Source
2024 5TH INTERNATIONAL CONFERENCE ON COMPUTER ENGINEERING AND APPLICATION, ICCEA 2024 | 2024
Keywords
Visual SLAM; Motion Segmentation; Instance Segmentation; Mask Image;
DOI
10.1109/ICCEA62105.2024.10604177
CLC Number
TP39 [Applications of computers];
Discipline Code
081203 ; 0835 ;
Abstract
Traditional simultaneous localization and mapping (SLAM) algorithms are prone to interference from dynamic objects in real-world scenarios, resulting in poor robustness and low localization accuracy. In this paper, a visual SLAM method based on motion segmentation is proposed, building upon the ORB-SLAM3 framework. First, the motion segmentation method Rigidmask, which combines geometric constraints, optical flow estimation, and depth estimation, is introduced to detect potential dynamic objects and generate dynamic-object mask images. Meanwhile, YOLO-World is employed for instance segmentation to obtain object mask images. Subsequently, the two types of mask images are matched against each other to improve motion segmentation accuracy. Finally, dynamic feature points are eliminated, and the remaining feature points are used for feature matching and pose estimation. Experimental results on the TUM dataset show that, compared with ORB-SLAM3 on highly dynamic sequences, the absolute trajectory error (ATE) of the proposed method is reduced by more than 89%. Its localization accuracy also improves on that of several mainstream visual SLAM algorithms for dynamic scenes.
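The mask-matching and feature-filtering steps described in the abstract can be sketched roughly as follows. This is an illustrative sketch, not the authors' code: the overlap threshold, function names, and the simple per-instance overlap test are all assumptions. It marks an instance-segmentation mask as dynamic when enough of it intersects the motion-segmentation mask, then discards feature points that fall inside the merged dynamic region.

```python
import numpy as np

def match_masks(motion_mask, instance_masks, overlap_thresh=0.5):
    """Merge instance masks into one dynamic mask.

    An instance (H x W boolean mask) is treated as dynamic when the
    fraction of its pixels covered by the motion mask exceeds the
    threshold (threshold value is an assumption, not from the paper).
    """
    dynamic = np.zeros_like(motion_mask, dtype=bool)
    for inst in instance_masks:
        area = inst.sum()
        if area == 0:
            continue
        if (inst & motion_mask).sum() / area > overlap_thresh:
            dynamic |= inst  # whole instance is flagged dynamic
    return dynamic

def filter_keypoints(keypoints, dynamic_mask):
    """Keep only feature points (u, v) that lie outside the dynamic mask."""
    return [(u, v) for (u, v) in keypoints if not dynamic_mask[v, u]]
```

In this reading, Rigidmask supplies `motion_mask`, YOLO-World supplies `instance_masks`, and the surviving keypoints feed ORB-SLAM3's pose estimation unchanged.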
Pages: 891-896
Page count: 6
Related Papers
11 records in total
[1]   DOT: Dynamic Object Tracking for Visual SLAM [J].
Ballester, Irene ;
Fontan, Alejandro ;
Civera, Javier ;
Strobl, Klaus H. ;
Triebel, Rudolph .
2021 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA 2021), 2021, :11705-11711
[2]   DynaSLAM: Tracking, Mapping, and Inpainting in Dynamic Scenes [J].
Bescos, Berta ;
Facil, Jose M. ;
Civera, Javier ;
Neira, Jose .
IEEE ROBOTICS AND AUTOMATION LETTERS, 2018, 3 (04) :4076-4083
[3]   ORB-SLAM3: An Accurate Open-Source Library for Visual, Visual-Inertial, and Multimap SLAM [J].
Campos, Carlos ;
Elvira, Richard ;
Gomez Rodriguez, Juan J. ;
Montiel, Jose M. M. ;
Tardos, Juan D. .
IEEE TRANSACTIONS ON ROBOTICS, 2021, 37 (06) :1874-1890
[4]   SLAM Overview: From Single Sensor to Heterogeneous Fusion [J].
Chen, Weifeng ;
Zhou, Chengjun ;
Shang, Guangtao ;
Wang, Xiyang ;
Li, Zhenxiong ;
Xu, Chonghui ;
Hu, Kai .
REMOTE SENSING, 2022, 14 (23)
[5]  
Cheng TH, 2024, arXiv:2401.17270, DOI 10.48550/arXiv.2401.17270
[6]   Direct Sparse Odometry [J].
Engel, Jakob ;
Koltun, Vladlen ;
Cremers, Daniel .
IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2018, 40 (03) :611-625
[7]  
Palazzolo E, 2019, IEEE INT C INT ROBOT, P7855, DOI 10.1109/IROS40897.2019.8967590
[8]   YOLO-SLAM: A semantic SLAM system towards dynamic environment with geometric constraint [J].
Wu, Wenxin ;
Guo, Liang ;
Gao, Hongli ;
You, Zhichao ;
Liu, Yuekai ;
Chen, Zhiqiang .
NEURAL COMPUTING & APPLICATIONS, 2022, 34 (08) :6011-6026
[9]   Learning to Segment Rigid Motions from Two Frames [J].
Yang, Gengshan ;
Ramanan, Deva .
2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2021, 2021, :1266-1275
[10]  
Yu C, 2018, IEEE INT C INT ROBOT, P1168, DOI 10.1109/IROS.2018.8593691