Instantaneous robot self-localization and motion estimation with omnidirectional vision

Cited by: 15
Authors
Spacek, Libor [1 ]
Burbridge, Christopher [1 ]
Affiliation
[1] Univ Essex, Dept Comp Sci, Colchester CO4 3SQ, Essex, England
Keywords
self-localization; motion estimation; omnidirectional vision; omnistereo; omniflow
DOI
10.1016/j.robot.2007.05.009
Chinese Library Classification (CLC)
TP [Automation technology, computer technology]
Discipline code
0812
Abstract
This paper presents two related methods for the autonomous visual guidance of robots: localization by trilateration, and interframe motion estimation. Both methods use coaxial omnidirectional stereopsis (omnistereo), which returns the range r to objects or guiding points detected in the images. The trilateration method achieves self-localization using r from the three nearest objects at known positions. The interframe motion estimation is more general, being able to use any features in an unknown environment. The guiding points are detected automatically on the basis of their perceptual significance; they therefore need neither special markings nor placement at known locations. The interframe motion estimation does not require previous motion history, making it well suited for detecting acceleration (in 1/20th of a second) and thus supporting dynamic models of the robot's motion, which will gain in importance when autonomous robots achieve useful speeds. An initial estimate of the robot's rotation omega (the visual compass) is obtained from the angular optic flow in an omnidirectional image. A new noniterative optic flow method has been developed for this purpose. Adding omega to all observed (robot-relative) bearings theta gives true bearings towards objects (relative to a fixed coordinate frame). The rotation omega and the r, theta coordinates obtained at two frames for a single fixed point at an unknown location are sufficient to estimate the translation of the robot. However, a large number of guiding points are typically detected and matched in most real images. Each such point provides a solution for the robot's translation. The solutions are combined by a robust clustering algorithm, Clumat, which reduces rotation and translation errors. Simulator experiments are included for all the presented methods. Real images obtained from an autonomously moving ScitosG5 robot were used to test the interframe rotation and to show that the presented vision methods are applicable to real images in real robotics scenarios. (c) 2007 Elsevier B.V. All rights reserved.
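The interframe translation estimate described in the abstract can be written down directly: given the rotation omega from the visual compass and a single fixed point observed at range and bearing (r, theta) in two consecutive frames, the difference of the two point positions, expressed in a common orientation, gives the robot's translation. The Python sketch below illustrates this geometry only; the function names are hypothetical, and the per-axis median is an illustrative stand-in for the paper's Clumat clustering, which is not reproduced here.

```python
import math
from statistics import median

def translation_from_point(r1, th1, r2, th2, omega):
    """Estimate the robot translation (dx, dy), in the frame-1 robot frame,
    from one fixed world point seen in two consecutive frames.

    r1, th1 : range and robot-relative bearing of the point in frame 1
    r2, th2 : range and robot-relative bearing of the same point in frame 2
    omega   : robot rotation between the frames (e.g. from the visual compass)
    """
    # Point position relative to the robot in frame 1.
    p1 = (r1 * math.cos(th1), r1 * math.sin(th1))
    # Point position in frame 2, rotated back into the frame-1 orientation.
    p2 = (r2 * math.cos(th2 + omega), r2 * math.sin(th2 + omega))
    # The world point is fixed, so the robot moved by the difference.
    return (p1[0] - p2[0], p1[1] - p2[1])

def combine_translations(estimates):
    """Robustly combine per-point translation estimates.
    Per-axis median used here as a stand-in for the Clumat clustering."""
    xs, ys = zip(*estimates)
    return (median(xs), median(ys))

if __name__ == "__main__":
    # Hypothetical matched guiding points: (r1, th1, r2, th2) per point.
    matches = [
        (2.0, 0.10, 1.8, 0.05),
        (3.5, 1.20, 3.4, 1.15),
        (5.0, -0.70, 5.2, -0.75),
    ]
    omega = 0.02  # rad, assumed known from the angular optic flow
    per_point = [translation_from_point(r1, t1, r2, t2, omega)
                 for r1, t1, r2, t2 in matches]
    print(combine_translations(per_point))
```

Each matched guiding point yields one such translation estimate; combining many of them robustly is what suppresses the errors of individual range and bearing measurements.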
Pages: 667-674
Page count: 8