How Challenging is a Challenge for SLAM? An Answer from Quantitative Visual Evaluation

Cited: 0
Authors
Zhao, Xuhui [1 ]
Gao, Zhi [1 ]
Li, Hao [1 ]
Li, Chenyang [1 ]
Chen, Jingwei [1 ]
Yi, Han [2 ]
Affiliations
[1] Wuhan Univ, Sch Remote Sensing & Informat Engn, Wuhan 430079, Peoples R China
[2] Natl Univ Singapore, Sch Comp, Singapore 117417, Singapore
Source
ADVANCES IN BRAIN INSPIRED COGNITIVE SYSTEMS, BICS 2023 | 2024, Vol. 14374
Funding
National Natural Science Foundation of China;
Keywords
SLAM; Robotics; Visual Challenges; Quantitative Evaluation;
DOI
10.1007/978-981-97-1417-9_17
Chinese Library Classification
TP301 [Theory, Methods];
Discipline Code
081202;
Abstract
SLAM (Simultaneous Localization and Mapping) is a fundamental technology for unmanned intelligent systems, such as fish robots for underwater exploration. However, various visual challenges occur in practical environments and severely threaten system robustness. Little research to date explicitly focuses on visual challenges for SLAM or analyzes them quantitatively, which limits the comprehensiveness and generalization of existing work. Many systems are not intelligent enough for the changing real world and are sometimes even infeasible for practical deployment, because they lack the accurate visual cognition of the ambient environment that many animals possess. Inspired by the visual perception pathways of the brain, we approach the problem from the perspective of visual cognition and propose a fully computational, reliable evaluation method for general challenges, aiming to push the frontier of visual SLAM. The method systematically decomposes various challenges into three relevant aspects and evaluates perception quality with corresponding scores. Extensive experiments on different datasets demonstrate the feasibility and effectiveness of our method through a strong correlation with SLAM performance. Moreover, the quantitative evaluation automatically yields detailed insights about each challenge, which is also important for developing targeted solutions. To the best of our knowledge, no similar work exists at present.
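As a rough illustration of the kind of evaluation the abstract describes, the sketch below scores each frame on three perceptual aspects and correlates the aggregate score with per-frame SLAM error. This is a minimal sketch, not the authors' method: the three aspects used here (illumination, blur, texture), the thresholds, and every function name are illustrative assumptions, since this record does not specify the paper's actual decomposition.

```python
# Hypothetical per-frame challenge scoring; NOT the paper's implementation.
import cv2
import numpy as np
from scipy.stats import pearsonr

def illumination_score(gray):
    # Distance of mean intensity from mid-gray; 0 = well exposed, 1 = extreme.
    return abs(float(gray.mean()) - 127.5) / 127.5

def blur_score(gray):
    # Inverse variance of the Laplacian; higher values mean a blurrier frame.
    return 1.0 / (1.0 + cv2.Laplacian(gray, cv2.CV_64F).var())

def texture_score(gray):
    # Fraction of pixels with weak gradients, i.e. low-texture regions.
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1)
    return float((np.hypot(gx, gy) < 10.0).mean())

def challenge_score(frame):
    # Aggregate the three aspect scores into one per-frame challenge value.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return float(np.mean([illumination_score(gray),
                          blur_score(gray),
                          texture_score(gray)]))

def correlate_with_slam(frames, per_frame_errors):
    # Pearson correlation between challenge scores and SLAM error
    # (e.g. per-frame absolute trajectory error from a TUM or KITTI run).
    scores = [challenge_score(f) for f in frames]
    return pearsonr(scores, per_frame_errors)
```

In a validation of this kind, a Pearson coefficient close to 1 would indicate that frames scored as more challenging also produce larger tracking error, which is the sort of correlation with SLAM performance the abstract reports.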
Pages: 179-189
Page count: 11