Uncertainty-aware visually-attentive navigation using deep neural networks

Cited by: 5
Authors
Nguyen, Huan [1 ]
Andersen, Rasmus [2 ]
Boukas, Evangelos [2 ]
Alexis, Kostas [1 ]
Affiliations
[1] Norwegian Univ Sci & Technol, Dept Engn Cybernet, Autonomous Robots Lab, Hogskoleringen 1, N-7034 Trondheim, Norway
[2] Tech Univ Denmark, Dept Elect & Photon Engn, Lyngby, Denmark
Keywords
Autonomous navigation; deep neural networks; uncertainty-aware navigation; visually-attentive navigation; aerial robots; SIMULTANEOUS LOCALIZATION; LARGE-SCALE; AUTONOMOUS EXPLORATION; MOTION; REPRESENTATION; FRAMEWORK; SEARCH; ROBUST
DOI
10.1177/02783649231218720
CLC (Chinese Library Classification)
TP24 [Robotics]
Subject classification codes
080202; 1405
Abstract
Autonomous navigation and information gathering in challenging environments are demanding since the robot's sensors may be susceptible to non-negligible noise, its localization and mapping may be subject to significant uncertainty and drift, and performing collision checking or evaluating utility functions on a map often incurs high computational cost. We propose a learning-based method that efficiently tackles this problem without relying on a map of the environment or the robot's position. Our method utilizes a Collision Prediction Network (CPN) to predict the collision scores of a set of action sequences, and an Information gain Prediction Network (IPN) to estimate their associated information gain. Both networks assume access to a) the depth image (CPN) or the depth image and the detection mask from any visual method (IPN), b) the robot's partial state (its linear velocities, z-axis angular velocity, and roll/pitch angles), and c) a library of action sequences. The CPN accounts for the estimation uncertainty of the robot's partial state and the network's epistemic uncertainty using the Unscented Transform and an ensemble of neural networks, respectively. The outputs of the two networks are combined with a goal vector to select the next-best-action sequence. Simulation studies demonstrate the method's robustness to noisy robot-velocity estimates and depth images, alongside its advantages over state-of-the-art methods and baselines in (visually-attentive) navigation tasks. Finally, multiple real-world experiments are presented, including safe flights at 2.5 m/s in a cluttered corridor, missions inside a dense forest, and visually-attentive navigation in industrial and university buildings.
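The abstract states that the CPN handles two uncertainty sources: it propagates the estimation uncertainty of the robot's partial state through the network via the Unscented Transform, and it captures the network's epistemic uncertainty with an ensemble. A minimal sketch of that combination follows; the paper's actual CPN, its depth-image input, and its action library are not reproduced here, so `toy_cpn`, `sigma_points`, and all parameter values below are hypothetical stand-ins.

```python
import math
import random
import statistics

def sigma_points(mean, cov_diag, kappa=0.5):
    """Generate 2n+1 Unscented Transform sigma points and weights
    for a state with diagonal covariance."""
    n = len(mean)
    pts = [list(mean)]
    wts = [kappa / (n + kappa)]
    for i in range(n):
        spread = math.sqrt((n + kappa) * cov_diag[i])
        for s in (+spread, -spread):
            p = list(mean)
            p[i] += s
            pts.append(p)
            wts.append(1.0 / (2.0 * (n + kappa)))
    return pts, wts  # weights sum to 1

def toy_cpn(params, state, action):
    """Hypothetical stand-in for one trained collision network:
    maps (partial state, action index) to a collision score in [0, 1]."""
    z = sum(w * x for w, x in zip(params, state)) + 0.1 * action
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid

def collision_score(ensemble, state_mean, state_cov_diag, action):
    """Mean collision score and its epistemic spread: each ensemble
    member is evaluated at every sigma point (UT handles the state
    uncertainty), and the disagreement across members serves as an
    epistemic-uncertainty proxy."""
    pts, wts = sigma_points(state_mean, state_cov_diag)
    per_member = [
        sum(w * toy_cpn(p, pt, action) for pt, w in zip(pts, wts))
        for p in ensemble
    ]
    return statistics.mean(per_member), statistics.pstdev(per_member)
```

In this sketch the next-best-action selection described in the abstract would then score each action sequence in the library by combining its collision score, predicted information gain, and alignment with the goal vector, and pick the maximizer.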
Pages: 840-872
Page count: 33