Identifying Terrain Physical Parameters From Vision - Towards Physical-Parameter-Aware Locomotion and Navigation

Times cited: 0
Authors
Chen, Jiaqi [1 ]
Frey, Jonas [1 ,2 ]
Zhou, Ruyi [1 ,3 ]
Miki, Takahiro [1 ]
Martius, Georg [2 ,4 ]
Hutter, Marco [1 ]
Affiliations
[1] Swiss Fed Inst Technol, Robot Syst Lab, CH-8092 Zurich, Switzerland
[2] Max Planck Inst Intelligent Syst Tubingen, D-72076 Tubingen, Germany
[3] Harbin Inst Technol, State Key Lab Robot & Syst, Harbin 150080, Peoples R China
[4] Univ Tubingen, D-72076 Tubingen, Germany
Source
IEEE ROBOTICS AND AUTOMATION LETTERS | 2024, Vol. 9, No. 11
Funding
National Natural Science Foundation of China; Swiss National Science Foundation
Keywords
Decoding; Robots; Friction; Visualization; Training; Robot sensing systems; Navigation; Legged robots; Deep learning for visual perception; Field robots
DOI
10.1109/LRA.2024.3455788
CLC number
TP24 [Robotics]
Discipline codes
080202; 1405
Abstract
Identifying the physical properties of the surrounding environment is essential for robotic locomotion and navigation to deal with non-geometric hazards, such as slippery and deformable terrains. It would be of great benefit for robots to anticipate these extreme physical properties before contact; however, estimating environmental physical parameters from vision is still an open challenge. Animals can achieve this by using their prior experience and knowledge of what they have seen and how it felt. In this work, we propose a cross-modal self-supervised learning framework for vision-based environmental physical parameter estimation, which paves the way for future physical-property-aware locomotion and navigation. We bridge the gap between existing policies trained in simulation and identification of physical terrain parameters from vision. We propose to train a physical decoder in simulation to predict friction and stiffness from multi-modal input. The trained network allows the labeling of real-world images with physical parameters in a self-supervised manner to further train a visual network during deployment, which can densely predict the friction and stiffness from image data. We validate our physical decoder in simulation and the real world using a quadruped ANYmal robot, outperforming an existing baseline method. We show that our visual network can predict the physical properties in indoor and outdoor experiments while allowing fast adaptation to new environments.
Pages: 9279-9286
Page count: 8