How Challenging is a Challenge? CEMS: a Challenge Evaluation Module for SLAM Visual Perception

Cited by: 4
Authors
Zhao, Xuhui [1 ]
Gao, Zhi [1 ]
Li, Hao [1 ]
Ji, Hong [1 ]
Yang, Hong [2 ]
Li, Chenyang [1 ]
Fang, Hao [3 ]
Chen, Ben M. [4 ]
Affiliations
[1] Wuhan Univ, Sch Remote Sensing & Informat Engn, Wuhan 430079, Hubei, Peoples R China
[2] Chinese Acad Sci, Aerosp Informat Res Inst, Beijing 100094, Peoples R China
[3] Beijing Inst Technol, Sch Automat, Beijing 100081, Peoples R China
[4] Chinese Univ Hong Kong, Dept Mech & Automat Engn, Hong Kong 999077, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Robotics; Resilient SLAM; Visual degradation; Challenge evaluation; QUALITY ASSESSMENT;
DOI
10.1007/s10846-024-02077-4
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Despite promising SLAM research in both the vision and robotics communities, which fundamentally sustains the autonomy of intelligent unmanned systems, visual challenges still severely threaten its robust operation. Existing SLAM methods usually focus on specific challenges and solve the problem with sophisticated enhancement or multi-modal fusion. However, they are largely limited to particular scenes and lack a quantitative understanding and awareness of challenges, resulting in significant performance declines with poor generalization and/or redundant computation caused by inflexible mechanisms. To push the frontier of visual SLAM, we propose a fully computational, reliable evaluation module called CEMS (Challenge Evaluation Module for SLAM) for general visual perception, built on a clear definition and systematic analysis of challenges. It decomposes various challenges into several common aspects and evaluates degradation with corresponding indicators. Extensive experiments demonstrate its feasibility and superior performance: the proposed module achieves 88.298% consistency with annotated ground truth and a strong correlation of 0.879 with SLAM tracking performance. Moreover, we present a prototype SLAM system based on CEMS with improved performance, and the first comprehensive CET (Challenge Evaluation Table) for common SLAM datasets (EuRoC, KITTI, etc.), providing objective and fair evaluations of various challenges. We make it available online on our website to benefit the community.
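The record does not reproduce the paper's actual indicators, so as an illustrative sketch only: per-frame degradation indicators of the kind the abstract describes (one score per challenge aspect, e.g. illumination and texture) might be computed like this. All function names and formulas here are hypothetical, not taken from CEMS.

```python
import numpy as np

def illumination_score(gray):
    """Crude illumination indicator: mean intensity of a [0, 1] grayscale frame.
    Low values suggest a dark scene likely to degrade feature tracking."""
    return float(gray.mean())

def texture_score(gray):
    """Crude texture/sharpness indicator: variance of a discrete Laplacian.
    Near-zero values suggest a textureless or blurred frame."""
    lap = (-4.0 * gray[1:-1, 1:-1]
           + gray[:-2, 1:-1] + gray[2:, 1:-1]
           + gray[1:-1, :-2] + gray[1:-1, 2:])
    return float(lap.var())

# A textured, well-lit frame versus a dark, flat one.
rng = np.random.default_rng(0)
good = np.clip(0.5 + 0.2 * rng.standard_normal((64, 64)), 0.0, 1.0)
bad = np.full((64, 64), 0.05)

assert illumination_score(good) > illumination_score(bad)
assert texture_score(good) > texture_score(bad)
```

In the paper's framework, scores like these would presumably be aggregated into a per-frame challenge level and correlated against SLAM tracking error; the 0.879 correlation reported in the abstract refers to the authors' own indicators, not this sketch.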
Pages: 19