Uncertainty-aware visually-attentive navigation using deep neural networks

Cited: 5
Authors
Nguyen, Huan [1 ]
Andersen, Rasmus [2 ]
Boukas, Evangelos [2 ]
Alexis, Kostas [1 ]
Affiliations
[1] Norwegian University of Science and Technology, Department of Engineering Cybernetics, Autonomous Robots Lab, Høgskoleringen 1, N-7034 Trondheim, Norway
[2] Technical University of Denmark, Department of Electrical and Photonics Engineering, Lyngby, Denmark
Keywords
Autonomous navigation; deep neural networks; uncertainty-aware navigation; visually-attentive navigation; aerial robots
DOI
10.1177/02783649231218720
Chinese Library Classification (CLC)
TP24 [Robotics]
Subject classification codes
080202; 1405
Abstract
Autonomous navigation and information gathering in challenging environments are demanding: the robot's sensors may be subject to non-negligible noise, its localization and mapping may suffer significant uncertainty and drift, and collision-checking or utility-function evaluation on a map often incurs high computational cost. We propose a learning-based method that tackles this problem efficiently without relying on a map of the environment or on the robot's position. Our method uses a Collision Prediction Network (CPN) to predict the collision scores of a set of action sequences, and an Information gain Prediction Network (IPN) to estimate their associated information gain. Both networks assume access to (a) the depth image (CPN), or the depth image and a detection mask from any visual method (IPN), (b) the robot's partial state (its linear velocities, z-axis angular velocity, and roll/pitch angles), and (c) a library of action sequences. Specifically, the CPN accounts for the estimation uncertainty of the robot's partial state and the network's epistemic uncertainty using the Unscented Transform and an ensemble of neural networks. The outputs of both networks are combined with a goal vector to identify the next-best action sequence. Simulation studies demonstrate the method's robustness against noisy robot velocity estimates and depth images, as well as its advantages over state-of-the-art methods and baselines in (visually-attentive) navigation tasks. Lastly, multiple real-world experiments are presented, including safe flights at 2.5 m/s in a cluttered corridor and missions inside a dense forest, alongside visually-attentive navigation in industrial and university buildings.
Pages: 840-872
Page count: 33
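
To make the abstract's selection step concrete, the following is a minimal Python/NumPy sketch, not the authors' implementation: it evaluates an ensemble of collision predictors at Unscented-Transform sigma points of the partial-state estimate, averages the results into a collision score, and combines predicted information gain with goal alignment to choose an action sequence. The network callables (cpn_ensemble, ipn), the weights w_info and w_goal, the threshold col_thresh, and the action-library shape are all hypothetical assumptions for illustration.

# Minimal illustrative sketch of the abstract's action-selection idea.
# All names and parameters here are hypothetical, not the paper's API.
import numpy as np

def sigma_points(mean, cov, kappa=1.0):
    # Standard Unscented Transform sigma points and weights for an
    # n-dimensional Gaussian estimate of the robot's partial state.
    n = mean.size
    S = np.linalg.cholesky((n + kappa) * cov)   # S @ S.T == (n+kappa)*cov
    pts = np.vstack([mean, mean + S.T, mean - S.T])  # (2n+1, n)
    w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    return pts, w

def select_action(depth, mask, state_mean, state_cov, actions, goal_vec,
                  cpn_ensemble, ipn, w_info=1.0, w_goal=1.0, col_thresh=0.2):
    pts, w = sigma_points(state_mean, state_cov)
    # Propagate state-estimation uncertainty (sigma points) and epistemic
    # uncertainty (ensemble members) through the collision predictor,
    # then average into one collision score per action sequence.
    col = np.mean(
        [w @ np.stack([net(depth, p, actions) for p in pts])
         for net in cpn_ensemble],
        axis=0)                                    # shape: (num_actions,)
    info = ipn(depth, mask, state_mean, actions)   # predicted information gain
    goal = actions[:, -1, :] @ goal_vec            # endpoint alignment with goal
    score = w_info * info + w_goal * goal
    score[col > col_thresh] = -np.inf              # discard unsafe sequences
    return int(np.argmax(score))

Here actions is assumed to be an array of shape (num_actions, horizon, 3) holding candidate position offsets, so actions[:, -1, :] @ goal_vec scores how well each sequence's endpoint aligns with the goal direction; the paper's actual utility combination and network interfaces may differ.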