Vision-Only Robot Navigation in a Neural Radiance World

Cited by: 118
Authors
Adamkiewicz, Michal [1 ]
Chen, Timothy [2 ]
Caccavale, Adam [3 ]
Gardner, Rachel [1 ]
Culbertson, Preston [3 ]
Bohg, Jeannette [1 ]
Schwager, Mac [2 ]
Affiliations
[1] Stanford Univ, Dept Comp Sci, Stanford, CA 94305 USA
[2] Stanford Univ, Dept Aeronaut & Astronaut, Stanford, CA 94305 USA
[3] Stanford Univ, Dept Mech Engn, Stanford, CA 94305 USA
Funding
U.S. National Science Foundation
Keywords
Collision avoidance; localization; motion and path planning; vision-based navigation; neural radiance fields; trajectory generation; optimization; fields
DOI
10.1109/LRA.2022.3150497
CLC Classification
TP24 [Robotics]
Discipline Classification Codes
080202; 1405
Abstract
Neural Radiance Fields (NeRFs) have recently emerged as a powerful paradigm for representing natural, complex 3D scenes. NeRFs encode continuous volumetric density and RGB values in a neural network and generate photo-realistic images from unseen camera viewpoints through ray tracing. We propose an algorithm for navigating a robot through a 3D environment represented as a NeRF using only an onboard RGB camera for localization. We assume the NeRF for the scene has been pre-trained offline, and the robot's objective is to navigate through unoccupied space in the NeRF to reach a goal pose. We introduce a trajectory optimization algorithm that avoids collisions with high-density regions in the NeRF, based on a discrete-time version of differential flatness that is amenable to constraining the robot's full pose and control inputs. We also introduce an optimization-based filtering method to estimate the robot's 6DoF pose and velocities in the NeRF given only an onboard RGB camera. We combine the trajectory planner with the pose filter in an online replanning loop to obtain a vision-based robot navigation pipeline. We present simulation results with a quadrotor robot navigating through a jungle gym environment, the inside of a church, and Stonehenge using only an RGB camera. We also demonstrate an omnidirectional ground robot navigating through the church, which requires it to reorient to fit through a narrow gap.
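The abstract's central idea is to treat the pre-trained NeRF's volumetric density as a differentiable proxy for occupancy inside a trajectory optimizer. The snippet below is a minimal illustrative sketch of that idea, not the authors' implementation: a hypothetical callable nerf_density (assumed to map 3D points to density values) is summed over points sampled on the robot body and penalized by gradient descent to push a waypoint trajectory into low-density, i.e. unoccupied, space. Start/goal constraints and the paper's discrete-time differential-flatness dynamics are omitted.

```python
# Minimal sketch (assumed interface, not the paper's code): use a pre-trained
# NeRF's density output as a differentiable collision penalty.
import torch


def collision_cost(waypoints, nerf_density, body_pts):
    """Sum NeRF density over points sampled on the robot body at each waypoint.

    waypoints: (T, 3) candidate positions along the trajectory.
    body_pts:  (B, 3) sample points approximating the robot body in its own frame.
    """
    pts = waypoints[:, None, :] + body_pts[None, :, :]   # (T, B, 3) body samples per waypoint
    sigma = nerf_density(pts.reshape(-1, 3))              # query density at every sample point
    return sigma.sum()                                    # high density => likely collision


def refine_trajectory(init_waypoints, nerf_density, body_pts, iters=200, lr=1e-2):
    """Gradient-descent refinement of waypoints toward a low-density, smooth path."""
    wp = init_waypoints.clone().requires_grad_(True)
    opt = torch.optim.Adam([wp], lr=lr)
    for _ in range(iters):
        # Finite-difference acceleration term keeps the path smooth.
        smoothness = ((wp[2:] - 2 * wp[1:-1] + wp[:-2]) ** 2).sum()
        loss = collision_cost(wp, nerf_density, body_pts) + 0.1 * smoothness
        opt.zero_grad()
        loss.backward()
        opt.step()
    return wp.detach()
```

In the paper this density penalty is combined with constraints on the robot's full pose and control inputs via discrete-time differential flatness, and the planner is run in a replanning loop with the optimization-based pose filter; the sketch only captures the density-as-obstacle ingredient.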
Pages: 4606-4613
Page count: 8