A review of visual SLAM with dynamic objects

Cited by: 9
Authors
Qin, Yong [1]
Yu, Haidong [1]
Affiliations
[1] Guangxi Univ Sci & Technol, Coll Automat, Liuzhou, Peoples R China
Source
INDUSTRIAL ROBOT-THE INTERNATIONAL JOURNAL OF ROBOTICS RESEARCH AND APPLICATION | 2023, Vol. 50, No. 6
Keywords
Visual SLAM; Dynamic object; Human posture; Visual sensor; Dynamic environment; VERSATILE; ROBUST; LINE;
DOI
10.1108/IR-07-2023-0162
Chinese Library Classification (CLC)
T [Industrial Technology];
Discipline Classification Code
08;
Abstract
Purpose - This paper aims to provide a better understanding of the challenges and potential solutions in Visual Simultaneous Localization and Mapping (SLAM), laying the foundation for its applications in autonomous navigation, intelligent driving and other related domains.
Design/methodology/approach - In analyzing the latest research, the review presents representative achievements, including methods to enhance efficiency, robustness and accuracy. It also provides insights into the future development direction of Visual SLAM, emphasizing the importance of improving system robustness in dynamic environments. The research methodology involves a literature review and data set analysis, enabling a comprehensive understanding of the current status and prospects of Visual SLAM.
Findings - The review comprehensively evaluates the latest advances and challenges in Visual SLAM. By collecting and analyzing relevant research papers and classic data sets, it identifies the issues Visual SLAM faces in complex environments and outlines potential solutions. The review first introduces the fundamental principles and application areas of Visual SLAM, then discusses in depth the challenges posed by dynamic objects and complex environments. To enhance SLAM performance, researchers have made progress by integrating different sensor modalities, improving feature extraction and incorporating deep learning techniques.
Originality/value - To the best of the authors' knowledge, the originality of this review lies in its in-depth analysis of current research hotspots and its predictions for future development, providing valuable references for researchers in this field.
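The abstract points to improved feature extraction and the handling of dynamic objects as routes to more robust Visual SLAM. The minimal Python sketch below illustrates that general idea only, not the authors' specific method: ORB keypoints are detected outside a dynamic-object mask, where the frame, the mask and the helper name extract_static_features are all hypothetical placeholders for what a real front end would obtain from a segmentation model.

```python
# Illustrative sketch, NOT the reviewed paper's method: suppress feature
# detection on dynamic objects (e.g., people) before pose estimation.
import cv2
import numpy as np


def extract_static_features(image_gray, dynamic_mask, n_features=1000):
    """Detect ORB features only in regions not covered by dynamic objects.

    image_gray   : uint8 grayscale frame
    dynamic_mask : uint8 mask, 255 where pixels belong to dynamic objects
    """
    # detectAndCompute uses nonzero mask pixels as the allowed detection
    # region, so invert the dynamic-object mask to keep only static areas.
    static_region = cv2.bitwise_not(dynamic_mask)
    orb = cv2.ORB_create(nfeatures=n_features)
    keypoints, descriptors = orb.detectAndCompute(image_gray, static_region)
    return keypoints, descriptors


if __name__ == "__main__":
    # Synthetic frame and mask stand in for a real image and a segmentation
    # result (both are assumptions made to keep the example runnable).
    frame = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
    mask = np.zeros((480, 640), dtype=np.uint8)
    mask[100:300, 200:400] = 255  # pretend a moving person occupies this box
    kps, desc = extract_static_features(frame, mask)
    print(f"{len(kps)} keypoints detected outside the dynamic region")
```

In a complete system the surviving keypoints would feed the usual matching and pose-estimation stages; the sketch only shows the masking step that the dynamic-SLAM literature surveyed here commonly builds on.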
Pages: 1000-1010
Page count: 11