Monocular Camera and Single-Beam Sonar-Based Underwater Collision-Free Navigation with Domain Randomization

Cited by: 1
Authors
Yang, Pengzhi [1 ]
Liu, Haowen [2 ]
Roznere, Monika [2 ]
Li, Alberto Quattrini [2 ]
Affiliations
[1] Univ Elect Sci & Technol China, Chengdu 610056, Sichuan, Peoples R China
[2] Dartmouth Coll, Hanover, NH 03755 USA
Source
ROBOTICS RESEARCH, ISRR 2022 | 2023, Vol. 27
Keywords
Monocular camera and sonar-based 3D underwater navigation; Low-cost AUV; Deep reinforcement learning; Domain randomization; DRIVEN VISUAL NAVIGATION; REINFORCEMENT;
DOI
10.1007/978-3-031-25555-7_7
Chinese Library Classification
TP24 [Robotics];
Discipline Codes
080202; 1405;
Abstract
Underwater navigation presents several challenges, including unstructured unknown environments, lack of reliable localization systems (e.g., GPS), and poor visibility. Furthermore, good-quality obstacle detection sensors for underwater robots are scarce and costly, and many sensors, such as RGB-D cameras and LiDAR, work only in air. To enable reliable mapless underwater navigation despite these challenges, we propose a low-cost end-to-end navigation system, based on a monocular camera and a fixed single-beam echo-sounder, that efficiently navigates an underwater robot to waypoints while avoiding nearby obstacles. Our proposed method is based on Proximal Policy Optimization (PPO): the policy takes as input the current relative goal information, estimated depth images, echo-sounder readings, and previously executed actions, and outputs 3D robot actions on a normalized scale. End-to-end training was done in simulation, where we adopted domain randomization (varying underwater conditions and visibility) to learn a policy robust to noise and changes in visibility conditions. Experiments in simulation and in the real world demonstrate that our proposed method is successful and resilient in navigating a low-cost underwater robot in unknown underwater environments. The implementation is publicly available at https://github.com/dartmouthrobotics/deeprl-uw-robot-navigation.
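The abstract describes a policy that maps a depth image, an echo-sounder reading, the relative goal, and the previous action to a normalized 3D action, trained with domain randomization over visibility. The sketch below is a minimal, untrained NumPy mock-up of that observation-to-action interface; the image resolution, the visibility/attenuation model, and the two-layer tanh network are illustrative assumptions, not the authors' architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def randomize_visibility(depth_image, rng):
    """Domain randomization (illustrative): vary visibility by
    attenuating and adding noise to the estimated depth image."""
    attenuation = rng.uniform(0.5, 1.0)
    noise = rng.normal(0.0, 0.05, size=depth_image.shape)
    return np.clip(depth_image * attenuation + noise, 0.0, 1.0)

def policy(depth_image, sonar_range, rel_goal, prev_action, weights):
    """Toy end-to-end policy: concatenate the observation, run a
    two-layer network, and squash to a normalized 3D action."""
    obs = np.concatenate([depth_image.ravel(),
                          [sonar_range],
                          rel_goal,
                          prev_action])
    hidden = np.tanh(obs @ weights["w1"])
    # tanh keeps each action component (e.g., surge, yaw, heave)
    # on a normalized [-1, 1] scale, as the abstract describes.
    return np.tanh(hidden @ weights["w2"])

# Assumed dimensions: 32x32 depth image, 1 sonar range,
# 3D relative goal, 3D previous action.
obs_dim = 32 * 32 + 1 + 3 + 3
weights = {"w1": rng.normal(0.0, 0.05, (obs_dim, 64)),
           "w2": rng.normal(0.0, 0.05, (64, 3))}

depth = randomize_visibility(rng.uniform(0.0, 1.0, (32, 32)), rng)
action = policy(depth, 2.5, np.array([1.0, 0.2, -0.5]),
                np.zeros(3), weights)
# action is a 3-vector with every component in [-1, 1]
```

In the paper's actual pipeline, such a network would be trained with PPO in simulation under randomized water conditions before transfer to the real robot; the randomization step above only hints at that idea.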
Pages: 85-101
Page count: 17