Legged Robots that Keep on Learning: Fine-Tuning Locomotion Policies in the Real World

Cited by: 31
Authors
Smith, Laura [1 ]
Kew, J. Chase [2 ]
Peng, Xue Bin [1 ]
Ha, Sehoon [2 ,3 ]
Tang, Jie [2 ]
Levine, Sergey [1 ,2 ]
Affiliations
[1] Univ Calif Berkeley, Berkeley AI Res, Berkeley, CA 94720 USA
[2] Google Res, New York, NY USA
[3] Georgia Inst Technol, Atlanta, GA USA
Source
2022 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA 2022) | 2022
DOI
10.1109/ICRA46639.2022.9812166
Chinese Library Classification
TP [automation technology; computer technology]
Discipline Code
0812
Abstract
Legged robots are physically capable of traversing a wide range of challenging environments, but designing controllers that are sufficiently robust to handle this diversity has been a long-standing challenge in robotics. Reinforcement learning presents an appealing approach for automating the controller design process and has been able to produce remarkably robust controllers when trained in a suitable range of environments. However, it is difficult to predict all likely conditions the robot will encounter during deployment and enumerate them at training-time. What if instead of training controllers that are robust enough to handle any eventuality, we enable the robot to continually learn in any setting it finds itself in? This kind of real-world reinforcement learning poses a number of challenges, including efficiency, safety, and autonomy. To address these challenges, we propose a practical robot reinforcement learning system for fine-tuning locomotion policies in the real world. We demonstrate that a modest amount of real-world training can substantially improve performance during deployment, and this enables a real A1 quadrupedal robot to autonomously fine-tune multiple locomotion skills in a range of environments, including an outdoor lawn and a variety of indoor terrains. (Videos and code available online.)
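As a rough illustration of the fine-tuning idea described in the abstract (pretrain a policy in one setting, then continue reinforcement-learning updates on data from the deployment setting rather than learning from scratch), the sketch below runs REINFORCE on a 1-D toy task. Everything here (the toy task, the linear-Gaussian policy, the gains) is an assumption made for illustration; it is not the authors' actual system or algorithm.

```python
import numpy as np

SIGMA = 0.5  # fixed exploration noise of the Gaussian policy

def rollout(w, target_gain, rng, n=64):
    """Sample transitions from a 1-D toy task where reward peaks at a = target_gain * s."""
    s = rng.uniform(-1.0, 1.0, size=n)
    a = w * s + SIGMA * rng.standard_normal(n)
    r = -(a - target_gain * s) ** 2
    return s, a, r

def reinforce_update(w, s, a, r, lr=0.1):
    """One REINFORCE step for the policy a ~ N(w*s, SIGMA^2)."""
    adv = r - r.mean()  # mean-reward baseline for variance reduction
    grad = np.mean(adv * (a - w * s) / SIGMA**2 * s)  # score-function gradient
    return w + lr * grad

def train(w, target_gain, steps, seed):
    rng = np.random.default_rng(seed)
    for _ in range(steps):
        s, a, r = rollout(w, target_gain, rng)
        w = reinforce_update(w, s, a, r)
    return w

# "Simulation" pretraining: the optimal gain in this environment is 2.0.
w_pre = train(0.0, target_gain=2.0, steps=300, seed=0)
# "Real-world" fine-tuning: shifted dynamics make the optimal gain 3.0;
# we warm-start from the pretrained w_pre instead of restarting at 0.
w_ft = train(w_pre, target_gain=3.0, steps=300, seed=1)
```

The warm start is the point: the fine-tuning phase begins near a good policy, so far fewer (expensive, real-world) samples are needed than training from scratch in the new environment.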
Pages: 1593-1599
Page count: 7
Related Papers
50 results in total
  • [1] Fateh, Mohammad Mehdi; Fateh, Sara. Fine-tuning fuzzy control of robots. JOURNAL OF INTELLIGENT & FUZZY SYSTEMS, 2013, 25 (04): 977-987
  • [2] Yang, Jingyun; Mark, Max Sobol; Vu, Brandon; Sharma, Archit; Bohg, Jeannette; Finn, Chelsea. Robot Fine-Tuning Made Easy: Pre-Training Rewards and Policies for Autonomous Real-World Reinforcement Learning. 2024 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION, ICRA 2024, 2024: 4804-4811
  • [3] Fine-tuning federal water policies. Science News, 153 (10)
  • [4] Kannan, Aditya; Shaw, Kenneth; Bahl, Shikhar; Mannam, Pragna; Pathak, Deepak. DEFT: Dexterous Fine-Tuning for Hand Policies. CONFERENCE ON ROBOT LEARNING, VOL 229, 2023
  • [5] Zhou, Chengxu. Special Issue "Legged Robots into the Real World". ROBOTICS, 2023, 12 (04)
  • [6] Vrbancic, Grega; Podgorelec, Vili. Transfer Learning With Adaptive Fine-Tuning. IEEE ACCESS, 2020, 8 (08): 196197-196211
  • [7] SCHIAFFINO, KM. FINE-TUNING THEORY TO THE NEEDS OF THE WORLD - RESPONSE. AMERICAN JOURNAL OF COMMUNITY PSYCHOLOGY, 1991, 19 (01): 99-102
  • [8] Akl, Mahmoud; Sandamirskaya, Yulia; Ergene, Deniz; Walter, Florian; Knoll, Alois. Fine-tuning Deep Reinforcement Learning Policies with r-STDP for Domain Adaptation. PROCEEDINGS OF INTERNATIONAL CONFERENCE ON NEUROMORPHIC SYSTEMS 2022, ICONS 2022, 2022
  • [9] Bellicoso, C. Dario; Bjelonic, Marko; Wellhausen, Lorenz; Holtmann, Kai; Guenther, Fabian; Tranzatto, Marco; Fankhauser, Peter; Hutter, Marco. Advances in real-world applications for legged robots. JOURNAL OF FIELD ROBOTICS, 2018, 35 (08): 1311-1326
  • [10] Hao, Yaru; Dong, Li; Wei, Furu; Xu, Ke. Investigating Learning Dynamics of BERT Fine-Tuning. 1ST CONFERENCE OF THE ASIA-PACIFIC CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS AND THE 10TH INTERNATIONAL JOINT CONFERENCE ON NATURAL LANGUAGE PROCESSING (AACL-IJCNLP 2020), 2020: 87-92