Multi-cue localization for soccer playing humanoid robots

Cited by: 0
Authors
Strasdat, Hauke [1 ]
Bennewitz, Maren [1 ]
Behnke, Sven [1 ]
Affiliation
[1] Univ Freiburg, Inst Comp Sci, D-79110 Freiburg, Germany
Source
ROBOCUP 2006: ROBOT SOCCER WORLD CUP X | 2007, Vol. 4434
DOI: not available
Chinese Library Classification: TP18 [Artificial Intelligence Theory]
Discipline Codes: 081104; 0812; 0835; 1405
Abstract
An essential capability of a soccer playing robot is to robustly and accurately estimate its pose on the field. Tracking the pose of a humanoid robot is, however, a complex problem. The main difficulties are that the robot has only a constrained field of view, which is often additionally affected by occlusions, that the roll angle of the camera changes continuously and can only be roughly estimated, and that dead reckoning provides only noisy estimates. In this paper, we present a technique that uses field lines, the center circle, corner poles, and goals extracted from the images of a low-cost wide-angle camera, as well as motion commands and a compass, to localize a humanoid robot on the soccer field. We present a new approach to robustly extract lines using detectors for oriented line points and the Hough transform. Since we first estimate the orientation, the individual line points are localized well in the Hough domain. In addition, while matching observed lines and model lines, we consider not only their Hough parameters. Our similarity measure also takes into account the positions and lengths of the lines. In this way, we obtain a much more reliable estimate of how well two lines fit. We apply Monte-Carlo localization to estimate the pose of the robot. The observation model used to evaluate the individual particles considers the differences between expected and measured distances and angles of the other landmarks. As we demonstrate in real-world experiments, our technique is able to robustly and accurately track the position of a humanoid robot on a soccer field. We also present experiments evaluating the utility of the different cues for pose estimation.
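To make the pose-tracking step concrete, the following Python sketch illustrates a minimal Monte-Carlo localization loop with the kind of distance-and-angle observation model the abstract describes. It is an assumption-laden illustration, not the authors' implementation: the landmark names and coordinates, the noise parameters sigma_d and sigma_a, and the Gaussian error model are all placeholders, and the paper's oriented-line-point extraction and line-matching similarity measure are not reproduced here.

```python
import math
import random

# Hypothetical landmark positions (x, y) in meters. Names and coordinates
# are illustrative assumptions, not values from the paper.
LANDMARKS = {
    "goal_yellow": (0.0, 3.0),
    "goal_blue": (12.0, 3.0),
    "pole_sw": (0.0, 0.0),
    "pole_nw": (0.0, 6.0),
}

def motion_update(particles, d_forward, d_turn, noise=(0.05, 0.05)):
    """Propagate each particle (x, y, theta) by a noisy odometry estimate."""
    moved = []
    for x, y, theta in particles:
        theta += d_turn + random.gauss(0.0, noise[1])
        step = d_forward + random.gauss(0.0, noise[0])
        moved.append((x + step * math.cos(theta),
                      y + step * math.sin(theta),
                      theta))
    return moved

def observation_weight(particle, observations, sigma_d=0.5, sigma_a=0.3):
    """Weight a particle by how well the expected distances and bearings to
    landmarks match the measured ones (assumed Gaussian error model)."""
    x, y, theta = particle
    weight = 1.0
    for name, (meas_dist, meas_bearing) in observations.items():
        lx, ly = LANDMARKS[name]
        exp_dist = math.hypot(lx - x, ly - y)
        exp_bearing = math.atan2(ly - y, lx - x) - theta
        # Wrap the bearing error into [-pi, pi].
        err = math.atan2(math.sin(exp_bearing - meas_bearing),
                         math.cos(exp_bearing - meas_bearing))
        weight *= math.exp(-(exp_dist - meas_dist) ** 2 / (2 * sigma_d ** 2))
        weight *= math.exp(-err ** 2 / (2 * sigma_a ** 2))
    return weight

def resample(particles, weights):
    """Low-variance resampling: draw a new set proportional to the weights."""
    total = sum(weights)
    if total == 0.0:
        return list(particles)  # degenerate case: keep the old set
    n = len(particles)
    step = total / n
    u = random.uniform(0.0, step)
    out, cumulative, i = [], weights[0], 0
    for _ in range(n):
        while u > cumulative:
            i += 1
            cumulative += weights[i]
        out.append(particles[i])
        u += step
    return out

# One filter iteration: predict from odometry, weight by observations, resample.
particles = [(random.uniform(0, 12), random.uniform(0, 6),
              random.uniform(-math.pi, math.pi)) for _ in range(200)]
particles = motion_update(particles, d_forward=0.1, d_turn=0.02)
obs = {"goal_yellow": (5.0, 0.4)}  # measured (distance, bearing) to one landmark
weights = [observation_weight(p, obs) for p in particles]
particles = resample(particles, weights)
```

In the full system described in the abstract, the particle weights would additionally incorporate the compass reading and the scores of the line-matching similarity measure; the sketch above covers only the distance-and-angle landmark cues.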
Pages: 245+
Number of pages: 3