Feature-based visual simultaneous localization and mapping: a survey

Cited by: 35
Authors
Azzam, Rana [1 ]
Taha, Tarek [2 ]
Huang, Shoudong [3 ]
Zweiri, Yahya [4 ]
Affiliations
[1] Khalifa Univ Sci & Technol, Abu Dhabi, U Arab Emirates
[2] Algorythmas Autonomous Aerial Lab, Abu Dhabi, U Arab Emirates
[3] Univ Technol Sydney, Sydney, NSW, Australia
[4] Kingston Univ London, Fac Sci Engn & Comp, Kingston Upon Thames, England
Source
SN APPLIED SCIENCES | 2020, Vol. 2, Issue 2
Keywords
Robotics; SLAM; Localization; Sensors; Factor graphs; Semantics; LOOP CLOSURE DETECTION; DYNAMIC ENVIRONMENTS; MONOCULAR OBJECT; SLAM; PERCEPTION; SENSOR; MODEL;
DOI
10.1007/s42452-020-2001-3
Chinese Library Classification (CLC)
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences];
Subject Classification Codes
07; 0710; 09
Abstract
Visual simultaneous localization and mapping (SLAM) has attracted considerable attention over the past few years. In this paper, a comprehensive survey of state-of-the-art feature-based visual SLAM approaches is presented. The reviewed approaches are classified according to the visual features observed in the environment. Visual features can be perceived at different levels: low-level features such as points and edges, middle-level features such as planes and blobs, and high-level features such as semantically labeled objects. One of the most critical research gaps identified in this study is the lack of generality. Some approaches exhibit a very high level of maturity in terms of accuracy and efficiency, yet they are tailored to very specific settings, such as feature-rich and static environments; when operating elsewhere, they suffer severe degradation in performance. In addition, due to software and hardware limitations, guaranteeing a robust visual SLAM approach remains extremely challenging. Although semantics have been heavily exploited in visual SLAM, scene understanding that incorporates relationships between features has not yet been fully explored. A detailed discussion of these research challenges is provided throughout the paper.
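To make the feature-level taxonomy above concrete, the sketch below (an illustrative assumption, not taken from the surveyed paper) shows the low-level feature step of a typical feature-based visual SLAM front end: detecting and matching ORB keypoints between two frames with OpenCV. The synthetic frames and parameter choices are placeholders; a full pipeline would additionally estimate relative camera pose from the matches and refine it in a back end such as a factor graph.

import cv2
import numpy as np

# Synthetic stand-ins for two consecutive camera frames (assumption:
# real input would come from a camera or dataset). The second frame is
# a horizontally shifted copy, simulating a small camera motion.
rng = np.random.default_rng(0)
frame1 = rng.integers(0, 256, (480, 640), dtype=np.uint8)
frame2 = np.roll(frame1, shift=5, axis=1)

# Low-level feature extraction: ORB keypoints and binary descriptors.
orb = cv2.ORB_create(nfeatures=500)
kp1, des1 = orb.detectAndCompute(frame1, None)
kp2, des2 = orb.detectAndCompute(frame2, None)

# Data association: brute-force Hamming matching with cross-checking,
# sorted so the most confident (lowest-distance) matches come first.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

print(len(kp1), "keypoints in frame 1,", len(kp2), "in frame 2,",
      len(matches), "cross-checked matches")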
Pages: 24