Towards High-Speed Localisation for Autonomous Drone Racing

Cited by: 16
Authors
Cocoma-Ortega, Jose Arturo [1]
Martinez-Carranza, Jose [1,2]
Affiliations
[1] Inst Nacl Astrofis Opt & Electr, Puebla, Mexico
[2] Univ Bristol, Bristol, Avon, England
Source
ADVANCES IN SOFT COMPUTING, MICAI 2019 | 2019 / Vol. 11835
Keywords
Autonomous Drone Racing; High-speed localisation; Convolutional neural network;
D O I
10.1007/978-3-030-33749-0_59
CLC number
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Knowing the pose of a drone on a race track is a challenging task in Autonomous Drone Racing (ADR). However, estimating the pose in real-time and at high speed could be fundamental to achieving the agile flight needed to beat a human in a drone race. In this work, we present the architecture of a CNN that automatically estimates the drone's pose relative to a gate on a race track. Given the challenges in ADR, various proposals have been developed to address the problem of autonomous navigation, including works that use a global localisation approach. Although there are well-known solutions for global localisation, such as visual odometry or visual SLAM, these methods may be too computationally expensive to run onboard. Motivated by this, we propose a CNN architecture based on the PoseNet network, which was designed to perform camera relocalisation in real-time. Our contribution lies in modifying and re-training the PoseNet network to adapt it to the context of relative localisation w.r.t. a gate on the track. The ultimate goal is to use our proposed localisation approach to tackle the autonomous navigation problem. We report a maximum speed of up to 100 fps on a low-budget computer. Furthermore, seeking to test our approach in realistic scenarios, we carried out experiments with small gates 1 m in diameter under different lighting conditions.
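As a rough illustration (not the authors' code), a PoseNet-style network replaces the classifier of a backbone CNN with a regression head that outputs a 3-D position and an orientation quaternion; the gate-relative variant described in the abstract would regress the drone's pose w.r.t. the gate rather than a global pose. A minimal NumPy sketch of such a regression head, with hypothetical layer sizes:

```python
import numpy as np

def pose_head(features, w_fc, b_fc, w_xyz, b_xyz, w_q, b_q):
    """PoseNet-style head: one shared fully connected layer, then two
    linear branches regressing position (x, y, z) and an orientation
    quaternion (w, x, y, z)."""
    h = np.maximum(features @ w_fc + b_fc, 0.0)  # ReLU on the shared layer
    xyz = h @ w_xyz + b_xyz                      # 3-D position relative to the gate
    q = h @ w_q + b_q
    q = q / np.linalg.norm(q)                    # normalise to a unit quaternion
    return xyz, q

# Toy dimensions (assumed, not from the paper): 2048-D backbone feature,
# 512-D shared layer, random untrained weights for demonstration only.
rng = np.random.default_rng(0)
feat = rng.standard_normal(2048)
xyz, q = pose_head(feat,
                   rng.standard_normal((2048, 512)) * 0.01, np.zeros(512),
                   rng.standard_normal((512, 3)) * 0.01, np.zeros(3),
                   rng.standard_normal((512, 4)) * 0.01, np.zeros(4))
print(xyz.shape, q.shape)  # (3,) (4,)
```

In the published PoseNet formulation the backbone is a GoogLeNet-style CNN and the head is trained with a weighted position/orientation loss; the sketch above only shows the shape of the regression output.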
Pages: 740-751
Page count: 12