Real-world humanoid locomotion with reinforcement learning

Cited by: 15
Authors
Radosavovic, Ilija [1 ]
Xiao, Tete [1 ]
Zhang, Bike [1 ]
Darrell, Trevor [1 ]
Malik, Jitendra [1 ]
Sreenath, Koushil [1 ]
Affiliations
[1] Univ Calif Berkeley, Berkeley, CA 94720 USA
Keywords
DYNAMICS;
DOI
10.1126/scirobotics.adi9579
CLC classification
TP24 [Robotics];
Discipline codes
080202 ; 1405 ;
Abstract
Humanoid robots that can autonomously operate in diverse environments have the potential to help address labor shortages in factories, assist elderly at home, and colonize new planets. Although classical controllers for humanoid robots have shown impressive results in a number of settings, they are challenging to generalize and adapt to new environments. Here, we present a fully learning-based approach for real-world humanoid locomotion. Our controller is a causal transformer that takes the history of proprioceptive observations and actions as input and predicts the next action. We hypothesized that the observation-action history contains useful information about the world that a powerful transformer model can use to adapt its behavior in context, without updating its weights. We trained our model with large-scale model-free reinforcement learning on an ensemble of randomized environments in simulation and deployed it to the real-world zero-shot. Our controller could walk over various outdoor terrains, was robust to external disturbances, and could adapt in context.
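The abstract's key mechanism is a causal transformer that attends over a history of proprioceptive observation-action tokens and reads the next action from the final position. The idea can be sketched as follows; this is a minimal single-head NumPy illustration, not the paper's architecture, and the embedding width, weight initialization, and 4-dimensional action head are illustrative assumptions.

```python
import numpy as np

D = 16  # token embedding width (illustrative assumption)
rng = np.random.default_rng(0)
# random projection weights for query/key/value and an action head (toy values)
Wq, Wk, Wv = (rng.standard_normal((D, D)) / np.sqrt(D) for _ in range(3))
W_out = rng.standard_normal((D, 4)) / np.sqrt(D)  # hypothetical 4-dim action head

def causal_self_attention(x):
    """Single-head causal self-attention over a (T, D) token sequence."""
    T, _ = x.shape
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = (q @ k.T) / np.sqrt(D)
    # causal mask: position t may only attend to positions <= t
    scores[np.triu(np.ones((T, T), dtype=bool), k=1)] = -1e9
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ v

def next_action(tokens):
    """Predict the next action from the final attended history token."""
    h = causal_self_attention(tokens)
    return h[-1] @ W_out

# 10 interleaved observation/action tokens standing in for the real history
history = rng.standard_normal((10, D))
a = next_action(history)
```

Because of the causal mask, appending a new token to the history leaves the attended representations of all earlier positions unchanged, which is what lets such a policy condition on its full observation-action history online.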
Pages: 12
Related papers (50 in total)
  • [1] Real-world reinforcement learning for autonomous humanoid robot docking
    Navarro-Guerrero, Nicolas
    Weber, Cornelius
    Schroeter, Pascal
    Wermter, Stefan
    ROBOTICS AND AUTONOMOUS SYSTEMS, 2012, 60 (11) : 1400 - 1407
  • [2] A Real-World Quadrupedal Locomotion Benchmark for Offline Reinforcement Learning
    Zhang, Hongyin
    Yang, Shuyu
    Wang, Donglin
    2024 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, IJCNN 2024, 2024,
  • [3] Deep reinforcement learning for real-world quadrupedal locomotion: a comprehensive review
    Zhang, Hongyin
    He, Li
    Wang, Donglin
    INTELLIGENCE & ROBOTICS, 2022, 2 (03): : 275 - 297
  • [4] Real-World Reinforcement Learning via Multifidelity Simulators
    Cutler, Mark
    Walsh, Thomas J.
    How, Jonathan P.
    IEEE TRANSACTIONS ON ROBOTICS, 2015, 31 (03) : 655 - 671
  • [5] Reinforcement Learning in Robotics: Applications and Real-World Challenges
    Kormushev, Petar
    Calinon, Sylvain
    Caldwell, Darwin G.
    ROBOTICS, 2013, 2 (03): : 122 - 148
  • [6] Offline Learning of Counterfactual Predictions for Real-World Robotic Reinforcement Learning
    Jin, Jun
    Graves, Daniel
    Haigh, Cameron
    Luo, Jun
    Jagersand, Martin
    2022 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA 2022), 2022, : 3616 - 3623
  • [7] Setting up a Reinforcement Learning Task with a Real-World Robot
    Mahmood, A. Rupam
    Korenkevych, Dmytro
    Komer, Brent J.
    Bergstra, James
    2018 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2018, : 4635 - 4640
  • [8] NeoRL: A Near Real-World Benchmark for Offline Reinforcement Learning
    Qin, Rong-Jun
    Zhang, Xingyuan
    Gao, Songyi
    Chen, Xiong-Hui
    Li, Zewen
    Zhang, Weinan
    Yu, Yang
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35, NEURIPS 2022, 2022,
  • [9] Toward the confident deployment of real-world reinforcement learning agents
    Hanna, Josiah P.
    AI MAGAZINE, 2024, 45 (03) : 396 - 403
  • [10] Challenges of real-world reinforcement learning: definitions, benchmarks and analysis
    Dulac-Arnold, Gabriel
    Levine, Nir
    Mankowitz, Daniel J.
    Li, Jerry
    Paduraru, Cosmin
    Gowal, Sven
    Hester, Todd
    MACHINE LEARNING, 2021, 110 (09) : 2419 - 2468