DroNet: Learning to Fly by Driving

Cited by: 315
Authors
Loquercio, Antonio [1 ,2 ,3 ]
Maqueda, Ana I. [4 ,5 ]
del-Blanco, Carlos R. [4 ,5 ]
Scaramuzza, Davide [1 ,2 ,3 ]
Affiliations
[1] Univ Zurich, Robot & Percept Grp, Dept Informat, CH-8092 Zurich, Switzerland
[2] Univ Zurich, Dept Neuroinformat, CH-8092 Zurich, Switzerland
[3] ETH, CH-8092 Zurich, Switzerland
[4] Univ Politecn Madrid, Grp Tratamiento Imagenes, Informat Proc & Telecommun Ctr, E-28040 Madrid, Spain
[5] Univ Politecn Madrid, ETSI Telecomunicac, E-28040 Madrid, Spain
Funding
Swiss National Science Foundation;
Keywords
Learning from demonstration; deep learning in robotics and automation; aerial systems: perception and autonomy;
DOI
10.1109/LRA.2018.2795643
Chinese Library Classification (CLC)
TP24 [Robotics];
Subject Classification Codes
080202; 1405;
Abstract
Civilian drones are soon expected to be used in a wide variety of tasks, such as aerial surveillance, delivery, or monitoring of existing architectures. Nevertheless, their deployment in urban environments has so far been limited. Indeed, in unstructured and highly dynamic scenarios, drones face numerous challenges to navigate autonomously in a feasible and safe way. In contrast to traditional "map-localize-plan" methods, this letter explores a data-driven approach to cope with the above challenges. To accomplish this, we propose DroNet: a convolutional neural network that can safely drive a drone through the streets of a city. Designed as a fast eight-layer residual network, DroNet produces two outputs for each single input image: a steering angle to keep the drone navigating while avoiding obstacles, and a collision probability to let the UAV recognize dangerous situations and promptly react to them. The challenge, however, is to collect enough data in an unstructured outdoor environment such as a city. Clearly, having an expert pilot provide training trajectories is not an option, given the large amount of data required and, above all, the risk it poses to other vehicles and pedestrians moving in the streets. Therefore, we propose to train a UAV from data collected by cars and bicycles, which, being already integrated into the urban environment, do not endanger other vehicles and pedestrians. Although trained on city streets from the viewpoint of urban vehicles, the navigation policy learned by DroNet is highly generalizable. Indeed, it allows a UAV to successfully fly at relatively high altitudes and even in indoor environments, such as parking lots and corridors. To share our findings with the robotics community, we publicly release all our datasets, code, and trained networks.
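The abstract describes a network with one shared backbone and two heads: a steering-angle regression and a collision-probability classification. As a minimal numpy sketch of that two-output design (all names, weights, and the 128-dimensional feature size are hypothetical illustrations, not the authors' released code):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    """Logistic function, mapping any real value into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

def dronet_heads(features, w_steer, w_coll):
    """Illustrative two-headed output stage: shared image features feed
    a regression head (steering angle, unbounded) and a classification
    head (collision probability, squashed by a sigmoid)."""
    steering = float(features @ w_steer)             # regression output
    p_collision = float(sigmoid(features @ w_coll))  # probability in (0, 1)
    return steering, p_collision

# Toy example: stand-in feature vector from a hypothetical conv backbone.
features = rng.standard_normal(128)
w_steer = rng.standard_normal(128) * 0.01
w_coll = rng.standard_normal(128) * 0.01
steer, p_coll = dronet_heads(features, w_steer, w_coll)
```

Splitting the outputs this way lets the steering head be trained as regression and the collision head as binary classification over the same shared features, which matches the two quantities the abstract says are produced per input image.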
Pages: 1088-1095
Page count: 8