Improving Visual Localization Accuracy in Dynamic Environments Based on Dynamic Region Removal

Cited by: 41
Authors
Cheng, Jiyu [1 ]
Zhang, Hong [2 ]
Meng, Max Q. -H. [1 ]
Affiliations
[1] Chinese Univ Hong Kong, Dept Elect Engn, RPAI Lab, Hong Kong, Peoples R China
[2] Univ Alberta, Dept Comp Sci, Edmonton, AB T6G 2R3, Canada
Keywords
Visualization; Simultaneous localization and mapping; Feature extraction; Bayes methods; Cameras; Bayesian update; dynamic environment; object detection; visual localization; RGB-D SLAM; MOTION REMOVAL; MONOCULAR SLAM; ODOMETRY; TRACKING;
DOI
10.1109/TASE.2020.2964938
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Visual localization is a fundamental capability in robotics and has been well studied in recent decades. Although many state-of-the-art algorithms have been proposed, their success usually builds on the assumption that the working environment is static. In most real scenes, this assumption does not hold because there are inevitably moving objects, especially humans, which significantly degrade localization accuracy. To address this problem, we propose a robust visual localization system built on top of a feature-based visual simultaneous localization and mapping (SLAM) algorithm. We design a dynamic region detection method and use it to preprocess each input frame. Detection is performed in a Bayesian framework that combines prior knowledge generated by an object detection process with observation information. Given the detection result, only feature points extracted from the static regions are used for further visual localization. We performed experiments on the public TUM data set and on our own recorded data set, which captures everyday dynamic scenarios. Both qualitative and quantitative results are provided to show the feasibility and effectiveness of the proposed method.

Note to Practitioners: This article was motivated by the visual localization problem of mobile robots in dynamic working environments. Visual localization is an important and fundamental capability in robotics: when a mobile robot operates in an unknown environment, it must localize itself using the data collected from its perception sensors. In real scenes, current localization algorithms often suffer from moving objects. Moving objects, especially humans, introduce substantial noise into the localization process, which poses a major challenge and makes robotic implementations unstable. In this article, a novel strategy is designed to handle this problem. We leverage both the object detection result and observation information in a Bayesian framework for dynamic region detection. Once the dynamic regions are determined, we discard them and feed the remaining regions into a state-of-the-art visual SLAM system for further visual localization. The proposed strategy greatly improves localization accuracy in dynamic working environments and ensures robustness for robotic implementation.
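The Bayesian fusion of a detection prior with observation evidence described in the abstract might be sketched as follows. This is a minimal illustration only: the function names, prior values, and likelihoods are assumptions for the sketch, not the paper's actual model.

```python
def bayes_update(prior_dynamic, p_obs_given_dynamic, p_obs_given_static):
    """Posterior probability that a region is dynamic, given one observation.

    Standard Bayes rule over the binary hypothesis {dynamic, static}.
    """
    num = p_obs_given_dynamic * prior_dynamic
    den = num + p_obs_given_static * (1.0 - prior_dynamic)
    return num / den


def classify_region(detected_as_person, motion_evidence, thresh=0.5):
    """Label an image region as dynamic or static (illustrative values).

    detected_as_person: object-detector output supplying the prior
    motion_evidence: boolean observation, e.g. whether the region's
        feature points violate the static-scene geometric model
    """
    # Prior from the object detector: semantic classes that tend to move
    # (humans) get a high prior. Values 0.8 / 0.2 are assumed here.
    prior = 0.8 if detected_as_person else 0.2
    # Assumed observation model: probability of seeing this motion
    # evidence under each hypothesis.
    p_obs_dyn = 0.9 if motion_evidence else 0.3
    p_obs_stat = 0.1 if motion_evidence else 0.7
    posterior = bayes_update(prior, p_obs_dyn, p_obs_stat)
    return posterior, posterior > thresh
```

In a pipeline of this kind, regions classified as dynamic would be masked out and only features from the remaining regions passed to the SLAM front end.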
Pages: 1585-1596
Page count: 12
Related References
46 references total
[1] Bescos, Berta; Facil, Jose M.; Civera, Javier; Neira, Jose. DynaSLAM: Tracking, Mapping, and Inpainting in Dynamic Scenes. IEEE Robotics and Automation Letters, 2018, 3(4):4076-4083.
[2] Cheng J., IEEE T AUTOM SCI ENG.
[3] Cheng, Jiyu; Sun, Yuxiang; Meng, Max Q-H. Robust Semantic Mapping in Challenging Environments. Robotica, 2020, 38(2):256-270.
[4] Cheng, Jiyu; Sun, Yuxiang; Meng, Max Q-H. Improving monocular visual SLAM in dynamic environments: an optical-flow-based approach. Advanced Robotics, 2019, 33(12):576-589.
[5] Cheng JY. 2018 IEEE International Conference on Robotics and Biomimetics (ROBIO), 2018, p. 1981. DOI 10.1109/ROBIO.2018.8665075.
[6] Cheng JY. 2018 IEEE International Conference on Robotics and Biomimetics (ROBIO), 2018, p. 723. DOI 10.1109/ROBIO.2018.8664893.
[7] Fischler, M. A.; Bolles, R. C. Random Sample Consensus: A Paradigm for Model Fitting with Applications to Image Analysis and Automated Cartography. Communications of the ACM, 1981, 24(6):381-395.
[8] Forster, Christian; Zhang, Zichao; Gassner, Michael; Werlberger, Manuel; Scaramuzza, Davide. SVO: Semidirect Visual Odometry for Monocular and Multicamera Systems. IEEE Transactions on Robotics, 2017, 33(2):249-265.
[9] Guo, Shuai; Fang, Ting-Ting; Song, Tao; Xi, Feng-Feng; Wei, Bang-Guo. Tracking and localization for omni-directional mobile industrial robot using reflectors. Advances in Manufacturing, 2018, 6(1):118-125.
[10] Huang, Albert S.; Bachrach, Abraham; Henry, Peter; Krainin, Michael; Maturana, Daniel; Fox, Dieter; Roy, Nicholas. Visual Odometry and Mapping for Autonomous Flight Using an RGB-D Camera. Robotics Research, ISRR, 2017, 100.