USD-SLAM: A Universal Visual SLAM Based on Large Segmentation Model in Dynamic Environments

Cited by: 3
Authors
Wang, Jingwei [1 ]
Ren, Yizhang [2 ]
Li, Zhiwei [1 ]
Xie, Xiaoming [1 ]
Chen, Zilong [3 ]
Shen, Tianyu [1 ]
Liu, Huaping [3 ]
Wang, Kunfeng [1 ]
Affiliations
[1] Beijing Univ Chem Technol, Coll Informat Sci & Technol, Beijing 100029, Peoples R China
[2] Beihang Univ, Coll Instrumental Sci & Optoelect Engn, Beijing 100191, Peoples R China
[3] Tsinghua Univ, Dept Comp Sci & Technol, Beijing 100084, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Image segmentation; Simultaneous localization and mapping; Dynamics; Three-dimensional displays; Motion segmentation; Semantics; Feature extraction; Vehicle dynamics; Location awareness; Cameras; SLAM; localization; object detection; segmentation and categorization; TRACKING;
DOI
10.1109/LRA.2024.3498781
Chinese Library Classification (CLC)
TP24 [Robotics];
Discipline code
080202 ; 1405 ;
Abstract
Visual Simultaneous Localization and Mapping (SLAM) has been widely adopted in autonomous driving and robotics. While most SLAM systems operate effectively in static or low-dynamic environments, achieving precise pose estimation in diverse unknown dynamic environments remains a significant challenge. This letter introduces an advanced universal visual SLAM system (USD-SLAM) that combines a universal large segmentation model with a 3D spatial motion state constraint module to accurately handle any dynamic objects present in the environment. Our system first employs a large segmentation model, guided by precise prompts, to accurately identify movable regions. 3D spatial motion state constraints are then applied to these movable regions to determine which objects are actually moving. Finally, the moving object regions are excluded from subsequent tracking, localization, and mapping, ensuring stable and high-precision pose estimation. Experimental results demonstrate that our method operates robustly in various dynamic and static environments without additional training, providing higher localization accuracy than other advanced dynamic SLAM systems.
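The abstract outlines a three-stage pipeline: prompt-guided segmentation of movable regions, a 3D spatial motion state check, and exclusion of the moving regions from tracking, localization, and mapping. The minimal NumPy sketch below illustrates one plausible form of the last two stages only; it is not the authors' implementation. The function names, the reprojection-residual test, and the 3-pixel threshold are assumptions made for illustration, and the segmentation stage is assumed to have already assigned a region id to every feature.

```python
# Hypothetical sketch of a 3D spatial motion check and dynamic-feature filtering,
# loosely following the pipeline described in the USD-SLAM abstract.
import numpy as np

def backproject(uv, depth, K):
    """Lift pixel coordinates (N, 2) with per-feature depth (N,) into 3D camera points (N, 3)."""
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    x = (uv[:, 0] - cx) * depth / fx
    y = (uv[:, 1] - cy) * depth / fy
    return np.stack([x, y, depth], axis=1)

def project(pts, K):
    """Project 3D camera points (N, 3) back to pixel coordinates (N, 2)."""
    uvw = pts @ K.T
    return uvw[:, :2] / uvw[:, 2:3]

def is_region_moving(uv_prev, depth_prev, uv_curr, T_curr_prev, K, thresh_px=3.0):
    """Assumed form of a 3D spatial motion state check: backproject matched features
    of a movable region in the previous frame, transform them with the estimated
    camera motion T_curr_prev (4x4), and compare the reprojection with the matched
    feature locations in the current frame. A large residual suggests the region
    itself moved rather than just the camera."""
    pts_prev = backproject(uv_prev, depth_prev, K)
    pts_h = np.hstack([pts_prev, np.ones((len(pts_prev), 1))])
    pts_curr = (T_curr_prev @ pts_h.T).T[:, :3]
    residuals = np.linalg.norm(project(pts_curr, K) - uv_curr, axis=1)
    return np.median(residuals) > thresh_px

def filter_dynamic_features(uv, region_ids, moving_region_ids):
    """Drop features inside regions judged to be moving; the remainder feed the
    usual tracking, localization, and mapping back end."""
    keep = ~np.isin(region_ids, list(moving_region_ids))
    return uv[keep]
```

In this sketch, features whose movable region fails the motion check are simply discarded before pose estimation; the actual system may use different residuals, thresholds, or region-level reasoning.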
Pages: 11810-11817
Page count: 8