Metric Localisation for the NAO Robot

Cited by: 0
Authors
Alquisiris-Quecha, Oswualdo [1 ]
Martinez-Carranza, Jose [2 ]
Affiliations
[1] Inst Nacl Astrofis Opt & Electr, Dept Comp Sci, Puebla, Mexico
[2] Univ Bristol, Dept Comp Sci, Bristol, England
Source
PATTERN RECOGNITION (MCPR 2021) | 2021, Vol. 12725
Keywords
Depth estimation; Deep learning; CNN; SLAM; Optical flow; NAO robot; Navigation; Motion
DOI
10.1007/978-3-030-77004-4_12
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
We present a metric localisation approach for the NAO robot based on depth estimation from optical flow computed on a frame-to-frame basis. We propose to convert the optical flow into a 2-channel image from which 60×60 image patches are extracted. Each patch is passed as input to a Convolutional Neural Network (CNN) with a regressor in the last layer, so that a depth value is estimated for that patch. A depth image is formed by assembling the depth estimates obtained for all patches. The depth image is coupled with the RGB image and then passed to the well-known ORB-SLAM system in its RGB-D version, that is, a visual simultaneous localisation and mapping approach that uses RGB and depth images to build a 3D map of the scene and uses it to localise the camera. Because the depth images carry metric scale, the pose estimates are recovered in metres, and hence the NAO's position can be estimated in metres as well. Our approach exploits the robot's walking motion, which produces image displacements in consecutive frames, and takes advantage of the fact that the NAO's walking motion can be programmed to be performed at constant speed. We mount a depth camera on the NAO's head to produce a training dataset that associates RGB image patches with depth values, so that a CNN can be trained to learn the relationship between optical flow vectors and scene depth. For evaluation, we use one of the NAO's built-in cameras. Our experiments show that this approach is feasible and could be exploited in applications where the NAO requires a localisation system that does not depend on additional sensors or external localisation infrastructure.
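To make the pipeline concrete, here is a minimal sketch of the first stage, assuming OpenCV's implementation of Farnebäck's dense optical flow (reference [2] below) and the 60×60 patch size stated in the abstract; the function names and flow parameters are illustrative, not taken from the paper.

```python
import cv2
import numpy as np

PATCH = 60  # patch size stated in the abstract

def flow_patches(prev_bgr, curr_bgr):
    """Compute dense Farneback optical flow between two consecutive
    frames and cut the resulting 2-channel flow field into 60x60
    patches (one CNN input per patch)."""
    prev_gray = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)
    curr_gray = cv2.cvtColor(curr_bgr, cv2.COLOR_BGR2GRAY)
    # flow has shape (H, W, 2): horizontal and vertical displacement
    flow = cv2.calcOpticalFlowFarneback(
        prev_gray, curr_gray, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    h, w = flow.shape[:2]
    patches, coords = [], []
    for y in range(0, h - PATCH + 1, PATCH):
        for x in range(0, w - PATCH + 1, PATCH):
            patches.append(flow[y:y + PATCH, x:x + PATCH])
            coords.append((y, x))
    return np.stack(patches), coords
```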
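A patch-wise depth regressor consistent with the abstract would be a small CNN whose last layer is a single linear (regression) unit. The abstract fixes only the input (a 2-channel 60×60 flow patch) and the scalar output, so the layer sizes below are our own assumptions, sketched in PyTorch.

```python
import torch
import torch.nn as nn

class PatchDepthCNN(nn.Module):
    """Maps one 2-channel 60x60 optical-flow patch to a scalar depth.
    Layer sizes are illustrative; only the input and output shapes
    are given in the abstract."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 16, kernel_size=5), nn.ReLU(),
            nn.MaxPool2d(2),   # -> 16 x 28 x 28
            nn.Conv2d(16, 32, kernel_size=5), nn.ReLU(),
            nn.MaxPool2d(2),   # -> 32 x 12 x 12
            nn.Flatten())
        self.regressor = nn.Sequential(
            nn.Linear(32 * 12 * 12, 128), nn.ReLU(),
            nn.Linear(128, 1))  # single depth value (metres)

    def forward(self, x):
        return self.regressor(self.features(x))
```

Each (60, 60, 2) flow patch from the previous sketch would be transposed to channel-first (2, 60, 60) before being fed to this network.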
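Finally, the per-patch predictions are stitched back into a dense depth image and paired with the RGB frame for ORB-SLAM in RGB-D mode [8]. The sketch below assumes the TUM RGB-D conventions used by ORB-SLAM2's rgbd_tum example (16-bit depth PNGs, a depth scale factor of 5000 matching DepthMapFactor in the settings file, and an associations file); the directory layout, timestamps, and helper names are hypothetical.

```python
import cv2
import numpy as np

def assemble_depth_image(shape, preds, coords, patch=60):
    """Fill each 60x60 block of an empty depth map with the depth
    predicted by the CNN for the corresponding flow patch."""
    depth = np.zeros(shape, dtype=np.float32)
    for d, (y, x) in zip(preds, coords):
        depth[y:y + patch, x:x + patch] = d
    return depth

def save_for_orbslam2(idx, rgb_bgr, depth_m, factor=5000.0, fps=30.0):
    """Write one RGB-D pair in a TUM-style layout: 16-bit depth PNGs
    where one unit equals 1/factor metres, plus an associations line
    pairing the RGB and depth files by timestamp."""
    cv2.imwrite(f"rgb/{idx:06d}.png", rgb_bgr)
    cv2.imwrite(f"depth/{idx:06d}.png",
                (depth_m * factor).astype(np.uint16))
    t = idx / fps  # assumed constant frame rate for timestamps
    with open("associations.txt", "a") as f:
        f.write(f"{t:.6f} rgb/{idx:06d}.png {t:.6f} depth/{idx:06d}.png\n")
```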
Pages: 121-130
Number of pages: 10
Related References
18 in total
[1] Alquisiris-Quecha O. Research in Computing Science, 2019, 148: 49.
[2] Farnebäck G. Two-frame motion estimation based on polynomial expansion. Image Analysis, Proceedings, 2003, 2749: 363-370.
[3] Gil C.R., Calvo H., Sossa H. Learning an Efficient Gait Cycle of a Biped Robot Based on Reinforcement Learning and Artificial Neural Networks. Applied Sciences-Basel, 2019, 9(3).
[4] Ho H.W., de Croon G.C.H.E., Chu Q. Distance and velocity estimation using optical flow from a monocular camera. International Journal of Micro Air Vehicles, 2017, 9(3): 198-208.
[5] Hornung A., Wurm K.M., Bennewitz M. Humanoid Robot Localization in Complex Indoor Environments. IEEE/RSJ 2010 International Conference on Intelligent Robots and Systems (IROS 2010), 2010: 1690-1695.
[6] Li R.H. IEEE International Conference on Robotics and Automation (ICRA), 2018: 7286. DOI: 10.1109/ICRA.2018.8461251.
[7] Lobos-Tsunekawa K., Leiva F., Ruiz-del-Solar J. Visual Navigation for Biped Humanoid Robots Using Deep Reinforcement Learning. IEEE Robotics and Automation Letters, 2018, 3(4): 3247-3254.
[8] Mur-Artal R., Tardos J.D. ORB-SLAM2: An Open-Source SLAM System for Monocular, Stereo, and RGB-D Cameras. IEEE Transactions on Robotics, 2017, 33(5): 1255-1262.
[9] Ponce H. International Joint Conference on Neural Networks, 2018: 1.
[10] Rioux A., Suleiman W. Autonomous SLAM based humanoid navigation in a cluttered environment while transporting a heavy load. Robotics and Autonomous Systems, 2018, 99: 50-62.