TSG-SLAM: SLAM Employing Tight Coupling of Instance Segmentation and Geometric Constraints in Complex Dynamic Environments

Cited by: 4
Authors
Zhang, Yongchao [1 ]
Li, Yuanming [2 ,3 ]
Chen, Pengzhan [1 ,3 ]
Xie, Yuanlong
Zheng, Shiqi
Hu, Zhaozheng
Wang, Shuting
Affiliations
[1] Taizhou Univ, Sch Intelligent Mfg, Taizhou 318000, Peoples R China
[2] Ganzhou Polytech, Dept Elect Engn, Ganzhou 341000, Peoples R China
[3] East China Jiaotong Univ, Sch Elect & Automat Engn, Nanchang 330013, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
SLAM; complex dynamic environment; fundamental matrix; semantic segmentation; multi-view geometric constraint; RGB-D SLAM; MOTION REMOVAL; MONOCULAR SLAM; TRACKING; RECONSTRUCTION; ODOMETRY;
DOI
10.3390/s23249807
Chinese Library Classification (CLC)
O65 [Analytical Chemistry];
Discipline Classification Codes
070302 ; 081704 ;
Abstract
Although numerous effective Simultaneous Localization and Mapping (SLAM) systems have been developed, complex dynamic environments continue to present challenges, such as handling moving objects and enabling robots to understand their surroundings. This paper presents a visual SLAM method designed specifically for complex dynamic environments. Our approach introduces a dynamic feature removal module based on the tight coupling of instance segmentation and multi-view geometric constraints (TSG). This method seamlessly integrates semantic information with geometric constraint data, using the fundamental matrix as the connecting element. In particular, instance segmentation is performed on each frame to eliminate all dynamic and potentially dynamic features, retaining only reliable static features for sequential feature matching and the estimation of a dependable fundamental matrix. Based on this matrix, true dynamic features are then identified and removed via multi-view geometric constraints, while reliable static features are preserved for further tracking and mapping. An instance-level semantic map of the global scene is constructed to enhance the perception and understanding of complex dynamic environments. The proposed method is evaluated on the TUM datasets and in real-world scenarios, demonstrating that TSG-SLAM achieves superior performance in detecting and eliminating dynamic feature points and attains good localization accuracy in dynamic environments.
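The abstract's core geometric test can be illustrated with a short sketch: given a fundamental matrix F estimated from static-feature matches, a matched point that moved between frames will lie far from its epipolar line, so thresholding the point-to-epipolar-line distance flags true dynamic features. This is a minimal, hypothetical illustration of the general epipolar-constraint check, not the paper's actual implementation; the function names and the threshold value are assumptions.

```python
import numpy as np

def epipolar_distances(F, pts1, pts2):
    """Distance of each point in pts2 to the epipolar line induced by its
    match in pts1 under fundamental matrix F (pts1, pts2: (N, 2) arrays)."""
    n = pts1.shape[0]
    h1 = np.hstack([pts1, np.ones((n, 1))])  # homogeneous coordinates
    h2 = np.hstack([pts2, np.ones((n, 1))])
    lines = h1 @ F.T                         # epipolar lines l2 = F @ x1 in image 2
    num = np.abs(np.sum(lines * h2, axis=1)) # |a*x2 + b*y2 + c|
    den = np.linalg.norm(lines[:, :2], axis=1)
    return num / den

def classify_dynamic(F, pts1, pts2, thresh=1.0):
    """Flag matches whose epipolar distance exceeds thresh (hypothetical value)."""
    return epipolar_distances(F, pts1, pts2) > thresh

# Toy example: F for a pure x-translation, so epipolar lines are horizontal.
F = np.array([[0., 0., 0.],
              [0., 0., -1.],
              [0., 1., 0.]])
pts1 = np.array([[10., 5.], [20., 8.]])
pts2 = np.array([[12., 5.], [25., 12.]])   # second match drifted vertically
print(classify_dynamic(F, pts1, pts2))     # → [False  True]
```

In a full pipeline such as the one described, F would come from robust estimation (e.g., RANSAC) over the segmentation-filtered static matches, and the flagged points would be excluded from tracking and mapping.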
Pages: 25