A model-based method for indoor mobile robot localization using monocular vision and straight-line correspondences

Cited: 31
Authors
Aider, OA [1 ]
Hoppenot, P [1 ]
Colle, E [1 ]
Affiliation
[1] Univ Evry, Lab Syst Complexes, F-91020 Evry, France
Keywords
mobile robot localization; visual feature matching
DOI
10.1016/j.robot.2005.03.002
CLC classification
TP [Automation technology, computer technology]
Subject classification code
0812
Abstract
A model-based method for indoor mobile robot localization is presented; it relies on monocular vision and straight-line correspondences. A classical four-step approach is adopted: image acquisition, image feature extraction, matching of image and model features, and camera pose computation. These four steps are discussed, with special focus on the critical matching problem. An efficient and simple method for searching image-to-model feature correspondences, designed for indoor mobile robot self-localization, is highlighted: a three-stage method based on the interpretation-tree search approach. In the first stage, the correspondence space is reduced by splitting the navigable space into view-invariant regions. In the second stage, making use of the specific frame of reference of mobile robotics, the global interpretation tree is divided into two sub-trees; two low-order geometric constraints are then defined and applied directly to the 2D-3D correspondences to improve pruning and search efficiency. In the last stage, the pose is computed for each matching hypothesis and the best one is selected according to a defined error function. Test results illustrate the performance of this approach. (C) 2005 Elsevier B.V. All rights reserved.
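The matching strategy summarised in the abstract can be illustrated with a small sketch. The Python code below is a minimal, illustrative interpretation-tree search over 2D image segments and 3D model lines: each image line is paired with a model line (or a wildcard), partial hypotheses are pruned with a pairwise geometric test, and complete hypotheses are ranked. The pairwise angle test, the wildcard handling and the ranking by number of matched lines are simplified stand-ins and assumptions of this sketch, not the paper's view-invariant regions, sub-tree split, low-order constraints or pose-based error function.

```python
import math

def seg_angle(seg):
    """Orientation in [0, pi) of a 2D image segment ((x1, y1), (x2, y2))."""
    (x1, y1), (x2, y2) = seg
    return math.atan2(y2 - y1, x2 - x1) % math.pi

def pairwise_ok(img_i, mdl_i, img_j, mdl_j, tol=math.radians(15)):
    """Illustrative low-order constraint: the angle between two image
    segments must roughly agree with the angle between the corresponding
    model lines (each model line is assumed to carry a precomputed
    'angle' field). A stand-in for the paper's geometric constraints."""
    d_img = abs(seg_angle(img_i) - seg_angle(img_j))
    d_mdl = abs(mdl_i["angle"] - mdl_j["angle"])
    return abs(d_img - d_mdl) < tol

def search(img_lines, model_lines, partial=None, results=None):
    """Depth-first interpretation-tree search with constraint-based pruning."""
    partial = [] if partial is None else partial
    results = [] if results is None else results
    depth = len(partial)
    if depth == len(img_lines):
        results.append(list(partial))       # complete matching hypothesis
        return results
    for mdl in model_lines:
        if any(m is mdl for _, m in partial):
            continue                        # each model line used at most once
        # Prune: the new pairing must be consistent with every earlier pairing.
        if all(pairwise_ok(img_lines[depth], mdl, img_k, m)
               for img_k, m in partial if m is not None):
            partial.append((img_lines[depth], mdl))
            search(img_lines, model_lines, partial, results)
            partial.pop()
    # Wildcard branch: leave this image line unmatched (clutter, occlusion).
    partial.append((img_lines[depth], None))
    search(img_lines, model_lines, partial, results)
    partial.pop()
    return results

def best_hypothesis(hypotheses, min_matches=3):
    """Rank complete hypotheses. The paper computes a camera pose per
    hypothesis and keeps the one minimising an error function; this
    sketch simply prefers the hypothesis with the most matched lines."""
    candidates = [h for h in hypotheses
                  if sum(m is not None for _, m in h) >= min_matches]
    return max(candidates,
               key=lambda h: sum(m is not None for _, m in h),
               default=None)
```

In use, a caller would obtain the image segments from an edge or line detector, take the model lines from the 3D map of the current region, and then evaluate best_hypothesis(search(img_lines, model_lines)); the pose-per-hypothesis computation and error function of the paper would replace the simple match-count ranking.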
Pages: 229-246
Number of pages: 18