How Challenging is a Challenge for SLAM? An Answer from Quantitative Visual Evaluation

Cited: 0
Authors
Zhao, Xuhui [1]
Gao, Zhi [1]
Li, Hao [1]
Li, Chenyang [1]
Chen, Jingwei [1]
Yi, Han [2]
Affiliations
[1] Wuhan Univ, Sch Remote Sensing & Informat Engn, Wuhan 430079, Peoples R China
[2] Natl Univ Singapore, Sch Comp, Singapore 117417, Singapore
Source
ADVANCES IN BRAIN INSPIRED COGNITIVE SYSTEMS, BICS 2023 | 2024 / Vol. 14374
Funding
National Natural Science Foundation of China;
Keywords
SLAM; Robotics; Visual Challenges; Quantitative Evaluation;
DOI
10.1007/978-981-97-1417-9_17
CLC number
TP301 [Theory, Methods];
Discipline code
081202;
Abstract
SLAM (Simultaneous Localization and Mapping) is a fundamental technology for unmanned intelligent systems, such as fish robots for underwater exploration. However, various visual challenges arise in practical environments and severely threaten system robustness. Little research currently focuses explicitly on visual challenges for SLAM or analyzes them quantitatively, so existing works lack comprehensiveness and generalization. Many are not robust enough in the changing real world and are sometimes even infeasible for practical deployment, because they lack the accurate visual cognition of the ambient environment that many animals possess. Inspired by the visual perception pathways of the brain, we approach the problem from the perspective of visual cognition and propose a fully computational, reliable evaluation method for general challenges to push the frontier of visual SLAM. It systematically decomposes various challenges into three relevant aspects and evaluates perception quality with corresponding scores. Extensive experiments on different datasets demonstrate the feasibility and effectiveness of our method through a strong correlation with SLAM performance. Moreover, the quantitative evaluation automatically yields detailed insights about challenges, which is also important for targeted solutions. To the best of our knowledge, no similar works exist at present.
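The abstract does not name the three aspects the method scores, so as a hedged illustration only, the sketch below computes generic per-frame quality proxies (illumination, contrast, texture richness) that are commonly used when quantifying visual difficulty for SLAM; the function name, normalizations, and choice of metrics are assumptions for this example, not the paper's actual method.

```python
import numpy as np

def frame_quality_scores(gray: np.ndarray) -> dict:
    """Illustrative per-frame scores for a grayscale image with values in [0, 255].

    These are generic proxies (brightness, contrast, texture), not the
    metrics defined in the paper; all normalizations are assumptions.
    """
    img = gray.astype(np.float64)
    # Illumination proxy: mean intensity, normalized to [0, 1].
    brightness = img.mean() / 255.0
    # Contrast proxy: intensity standard deviation, normalized by 127.5
    # (the maximum std of an 8-bit image, reached by a 0/255 checkerboard).
    contrast = img.std() / 127.5
    # Texture proxy: mean gradient magnitude from finite differences.
    gy, gx = np.gradient(img)
    texture = np.hypot(gx, gy).mean() / 255.0
    return {"brightness": brightness, "contrast": contrast, "texture": texture}

# Example: a flat gray frame scores mid brightness but zero contrast/texture,
# flagging it as challenging for feature-based tracking.
flat = np.full((64, 64), 128, dtype=np.uint8)
print(frame_quality_scores(flat))
```

Scores like these can then be correlated with trajectory error across datasets, which is the kind of quantitative link to SLAM performance the abstract describes.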
Pages: 179-189
Page count: 11