Visual SLAM With Multi-Object Tracking Using Hybrid Features

Cited by: 0
Authors
Li, Weidong [1 ]
Wen, Penghai [1 ]
Wang, Min [1 ]
Dai, Shi-Lu [1 ]
Affiliations
[1] South China Univ Technol, Sch Automat Sci & Engn, Guangzhou 510641, Peoples R China
Source
2024 43rd Chinese Control Conference, CCC 2024 | 2024
Funding
National Natural Science Foundation of China;
Keywords
Visual SLAM; Semantic SLAM; Object Tracking;
DOI
Not available
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Discipline Code
0812 ;
Abstract
Visual simultaneous localization and mapping (SLAM) enables ego localization and mapping, and provides critical technical support for autonomous driving and mobile robots. However, the rigid-body assumption usually made in the SLAM literature limits performance in dynamic environments. Moreover, incorporating surrounding objects into a unified estimation framework can enhance environmental understanding. In this paper, we present a unified SLAM system based on stereo cameras, which estimates the ego pose and tracks multiple objects in dynamic environments. Our system estimates the ego pose and tracks multiple objects separately, and creates individual local maps. To identify dynamic points, we calculate a dynamic probability for each feature and subsequently refine it using local bundle adjustment (BA). For multi-object tracking, we utilize motion information and depth estimation for initialization, and construct an object local BA that includes hybrid feature re-projection factors and motion smoothness factors for optimization. Several experiments demonstrate the improvement of our system over existing object SLAM systems.
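The abstract describes classifying features by a per-feature dynamic probability before bundle adjustment. The paper's exact formulation is not given here; the following is a minimal illustrative sketch of one common way to maintain such a probability, assuming a two-state Bayesian update driven by the feature's reprojection residual under the ego-motion model. All thresholds and noise scales are hypothetical, not taken from the paper.

```python
import math

def update_dynamic_probability(p_prev, residual, sigma=1.0, switch=0.1):
    """Fuse one reprojection residual (pixels) into a feature's dynamic probability.

    Assumption: static points have small Gaussian residuals under the ego-motion
    model, while dynamic points produce larger, roughly uniform errors.
    """
    lik_static = math.exp(-0.5 * (residual / sigma) ** 2)
    lik_dynamic = 0.2  # flat likelihood for a moving point (illustrative)
    # Allow state switching so a point can become dynamic (or static) over time.
    prior = p_prev * (1 - switch) + (1 - p_prev) * switch
    return prior * lik_dynamic / (prior * lik_dynamic + (1 - prior) * lik_static)

# A feature with consistently large residuals drifts toward "dynamic" ...
p = 0.5
for r in [3.0, 4.0, 3.5]:
    p = update_dynamic_probability(p, r)
is_dynamic = p > 0.8

# ... while small residuals drive the probability toward "static".
q = 0.5
for r in [0.1, 0.1, 0.1]:
    q = update_dynamic_probability(q, r)
is_static = q < 0.2
```

Features flagged as dynamic would then be excluded from the ego-pose local BA and handed to the object-tracking pipeline; the refinement of these probabilities inside BA, as the abstract describes, would additionally couple them to the optimized geometry.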
Pages: 4487-4492
Page count: 6