Estimation of Robot Position and Orientation Using a Stationary Fisheye Camera

Cited by: 7
Authors
Delibasis, Konstantinos K. [1 ]
Plagianakos, Vassilis P. [1 ]
Maglogiannis, Ilias [2 ]
Affiliations
[1] Univ Thessaly, Dept Comp Sci & Biomed Informat, Lamia, Greece
[2] Univ Piraeus, Dept Digital Syst, Piraeus, Greece
Keywords
Computer vision; indoor robot localization; stationary fisheye camera; MONOCULAR VISION; LOCALIZATION; MODEL;
DOI
10.1142/S0218213015600040
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
A core problem in robotics is determining the location and pose of a mobile robot in its environment. Localization is a basic operation that must be carried out reliably in complex environments using imprecise and/or contaminated data, and it is essential for a broad range of mobile robot tasks, since the robot's behavior depends on its position. In this work, we propose the use of a stationary fisheye camera for real-time robot localization in indoor environments. We employ a model of image formation by the fisheye camera, which is used both to accelerate the segmentation of the robot's top surface and to calculate the robot's true position in the real-world frame of reference. The proposed localization algorithm exploits the calibrated fisheye camera model and the known dimensions of the robot; it does not depend on any information from the robot's sensors and does not require visual landmarks in the indoor environment. Furthermore, the pose (orientation) of the robot is determined from a triangular shape placed on the robot's flat top surface, using Hu's moment invariants, appropriately modified through the calibrated fisheye camera model. Initial results are presented from video sequences and are compared to the ground-truth position obtained from the robot's sensors. The dependence of the average positional error on the distance from the camera is also measured.
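The abstract outlines two computational steps: back-projecting the segmented top surface through the calibrated fisheye model to recover the robot's floor-plane position, and estimating orientation from a triangular marker via modified Hu moment invariants. The Python sketch below illustrates both ideas under simplifying assumptions that are not taken from the paper: a generic equidistant fisheye projection (r = f*theta), a ceiling-mounted camera pointing straight down, and plain (unmodified) image moments for the marker. The parameter names (f_pix, cam_height, robot_height) are hypothetical.

import numpy as np
import cv2

def backproject_to_robot_plane(u, v, cx, cy, f_pix, cam_height, robot_height):
    # Pixel (u, v) on the robot's segmented top surface -> world (x, y) on the
    # horizontal plane z = robot_height, assuming an equidistant fisheye
    # (radius r = f_pix * theta) mounted on the ceiling, looking straight down.
    dx, dy = u - cx, v - cy
    r = np.hypot(dx, dy)               # radial distance from the principal point (pixels)
    theta = r / f_pix                  # polar angle of the viewing ray (equidistant model)
    phi = np.arctan2(dy, dx)           # azimuth of the viewing ray
    depth = cam_height - robot_height  # vertical drop from the camera to the robot's top
    rho = depth * np.tan(theta)        # horizontal offset of the ray at that depth
    return rho * np.cos(phi), rho * np.sin(phi)

def triangle_orientation(mask):
    # In-image orientation of the triangular marker from second-order central
    # moments (principal-axis angle, ambiguous by 180 degrees); Hu invariants
    # are returned only to show how a rotation-invariant signature is computed.
    m = cv2.moments(mask, binaryImage=True)
    hu = cv2.HuMoments(m).flatten()
    angle = 0.5 * np.arctan2(2.0 * m["mu11"], m["mu20"] - m["mu02"])
    return angle, hu

For example, feeding the centroid of the segmented top surface to backproject_to_robot_plane gives an (x, y) estimate on the floor plane, while triangle_orientation applied to the binarized marker mask gives a heading up to a 180-degree ambiguity; resolving that ambiguity, and compensating the fisheye distortion of the marker, is what the paper's modified invariants address.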
Pages: 16