Collision avoidance for a small drone with a monocular camera using deep reinforcement learning in an indoor environment

Cited: 0
Authors
Kim M. [1 ]
Kim J. [1 ]
Jung M. [1 ]
Oh H. [1 ]
Affiliations
[1] School of Mechanical, Aerospace and Nuclear Engineering, Ulsan National Institute of Science and Technology (UNIST)
Keywords
Collision avoidance; D3QN; Deep reinforcement learning; Depth estimation; Monocular camera;
DOI
10.5302/J.ICROS.2020.20.0014
Abstract
Collision avoidance of drones in a complex environment, especially in an indoor environment, is a challenging task. This paper develops an obstacle avoidance system for small multi-rotor drones based on a deep reinforcement learning algorithm using only a monocular camera. The proposed method comprises two steps: depth estimation and navigation decision-making. For the depth estimation step, a pre-trained depth estimation algorithm based on a CNN (Convolutional Neural Network) is used. In the navigation decision-making step, a dueling double deep Q-network is employed. The entire training procedure is performed in a Gazebo simulation environment using the Robot Operating System (ROS). To validate the robustness of the proposed approach, various simulations and experiments are conducted using a Parrot Bebop2 drone in an indoor corridor. We demonstrate that the proposed algorithm successfully navigates through a narrow corridor comprising a texture-free wall, people, and boxes. A supplementary video clip of the experiments can be found at https://youtu.be/oSQHCsvuE-8. © ICROS 2020.
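The abstract's decision-making step relies on a dueling double deep Q-network (D3QN). As a minimal sketch of the two ideas that name combines — the dueling aggregation of a state-value stream and an advantage stream, and the double-DQN target that selects the next action with the online network but evaluates it with the target network — the following numpy fragment illustrates the arithmetic. The function names and array shapes here are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def dueling_q(value, advantage):
    """Dueling aggregation: Q(s,a) = V(s) + (A(s,a) - mean_a A(s,a)).

    value:     shape (batch, 1), state-value stream output
    advantage: shape (batch, n_actions), advantage stream output
    """
    return value + (advantage - advantage.mean(axis=-1, keepdims=True))

def double_dqn_target(reward, done, gamma, q_online_next, q_target_next):
    """Double-DQN bootstrap target.

    Action selection uses the online network's Q-values for the next
    state; action evaluation uses the target network's Q-values,
    which reduces the overestimation bias of vanilla DQN.
    """
    a_star = np.argmax(q_online_next, axis=-1)          # select with online net
    q_eval = q_target_next[np.arange(len(a_star)), a_star]  # evaluate with target net
    return reward + gamma * (1.0 - done) * q_eval       # zero bootstrap at episode end
```

In a full agent, both networks would be CNNs consuming the estimated depth map, and `double_dqn_target` would supply the regression target for the temporal-difference loss.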
Pages: 399-411
Page count: 12
References
32 references in total
[1]  
Alvarez H., Paz L.M., Sturm J., Cremers D., Collision avoidance for quadrotors with a monocular camera, Experimental Robotics, pp. 195-209, (2016)
[2]  
Park J.M., Lee J.W., Nighttime vehicle detection on roads by combining image processing and CNN, Journal of Institute of Control, Robotics and Systems (In Korean), 25, 12, pp. 1085-1092, (2019)
[3]  
Green W.E., Oh P.Y., Optic-flow-based collision avoidance, IEEE Robotics & Automation Magazine, 15, 1, pp. 96-103, (2008)
[4]  
Cho G.I., Kim J., Oh H., Vision-based obstacle avoidance strategies for MAVs using optical flows in 3-D textured environments, Sensors, 19, 11, (2019)
[5]  
Mur-Artal R., Montiel J.M.M., Tardos J.D., ORB-SLAM: A versatile and accurate monocular SLAM system, IEEE Transactions on Robotics, 31, 5, pp. 1147-1163, (2015)
[6]  
Aulinas J., Petillot Y., Salvi J., Llado X., The SLAM Problem: A Survey, International Conference of the Catalan Association for Artificial Intelligence, 184, 1, pp. 363-391, (2008)
[7]  
Geiger A., Lenz P., Stiller C., Urtasun R., Vision meets robotics: The KITTI dataset, The International Journal of Robotics Research, 32, 11, pp. 1231-1237, (2013)
[8]  
Opromolla R., Fasano G., Accardo D., Perspectives and sensing concepts for small UAS sense and avoid, IEEE/AIAA 37th Digital Avionics Systems Conference, (2018)
[9]  
Chakravarty P., Kelchtermans K., Roussel T., Wellens S., Tuytelaars T., Van Eycken L., CNN-based single image obstacle avoidance on a quadrotor, IEEE International Conference on Robotics and Automation, pp. 6369-6374, (2017)
[10]  
Yang S., Konam S., Ma C., Rosenthal S., Veloso M., Scherer S., Obstacle Avoidance through Deep Networks Based Intermediate Perception, (2017)