Astrobee ISS Free-Flyer Datasets for Space Intra-Vehicular Robot Navigation Research

Cited by: 1
Authors
Kang, Suyoung [1 ]
Soussan, Ryan [2 ,3 ]
Lee, Daekyeong [4 ]
Coltin, Brian [2 ,3 ]
Vargas, Andres Mora [2 ,3 ]
Moreira, Marina [2 ,3 ]
Hamilton, Kathryn [2 ]
Garcia, Ruben [2 ,3 ]
Bualat, Maria [2 ]
Smith, Trey [2 ]
Barlow, Jonathan [2 ,3 ]
Benavides, Jose [2 ]
Jeong, Eunju [4 ]
Kim, Pyojin [5 ]
Affiliations
[1] Sookmyung Womens Univ, Dept Elect Engn, Seoul 04310, South Korea
[2] NASA Ames Res Ctr, Moffett Field, CA 94035 USA
[3] KBR Inc, Houston, TX 77058 USA
[4] Sookmyung Womens Univ, Dept Mech Syst Engn, Seoul 04310, South Korea
[5] Gwangju Inst Sci & Technol GIST, Sch Mech Engn, Gwangju 61005, South Korea
Keywords
Navigation; Robots; Cameras; Visualization; Robot vision systems; Space vehicles; Location awareness; Data Sets for SLAM; Space Robotics and Automation; SLAM; Autonomous Vehicle Navigation;
DOI
10.1109/LRA.2024.3364834
CLC classification: TP24 [Robotics]
Subject classification: 080202; 1405
Abstract
We present the first annotated benchmark datasets for evaluating free-flyer visual-inertial localization and mapping algorithms in a zero-g spacecraft interior. The Astrobee free-flying robots that operate inside the International Space Station (ISS) collected the datasets. Space intra-vehicular free-flyers face unique localization challenges: their IMU does not provide a gravity vector, their attitude is fully arbitrary, and they operate in a dynamic, cluttered environment. We extensively evaluate state-of-the-art visual navigation algorithms on these challenging Astrobee datasets, showing superior performance of classical geometry-based methods over recent data-driven approaches. The datasets include monocular images and IMU measurements, with multiple sequences performing a variety of maneuvers and covering four ISS modules. The sensor data is spatio-temporally aligned, and extrinsic/intrinsic calibrations, ground-truth 6-DoF camera poses, and detailed 3D CAD models are included to support evaluation. The datasets are available at: https://astrobee-iss-dataset.github.io/.
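Since the datasets ship with ground-truth 6-DoF camera poses, localization algorithms are typically scored by absolute trajectory error after rigid alignment. The sketch below is a minimal, illustrative example of that standard evaluation step using NumPy; the function names and the assumption that poses are available as N×3 position arrays are ours, not part of the dataset's tooling.

```python
import numpy as np

def align_rigid(est, gt):
    """Rigidly align (rotate + translate) estimated positions to ground truth.

    est, gt: (N, 3) arrays of corresponding camera positions.
    Uses the SVD-based Kabsch solution with a determinant check
    to avoid reflections.
    """
    mu_e, mu_g = est.mean(0), gt.mean(0)
    E, G = est - mu_e, gt - mu_g
    U, _, Vt = np.linalg.svd(G.T @ E)          # 3x3 cross-covariance
    d = np.sign(np.linalg.det(U @ Vt))         # reflection correction
    R = U @ np.diag([1.0, 1.0, d]) @ Vt
    t = mu_g - R @ mu_e
    return est @ R.T + t

def ate_rmse(est, gt):
    """Absolute trajectory error (RMSE of positions) after rigid alignment."""
    aligned = align_rigid(est, gt)
    return float(np.sqrt(((aligned - gt) ** 2).sum(axis=1).mean()))
```

In practice the estimated trajectory must first be associated with ground-truth poses by timestamp; the datasets' spatio-temporal alignment makes that association straightforward.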
Pages: 3307-3314
Page count: 8