High-speed quadrupedal locomotion by imitation-relaxation reinforcement learning

Times cited: 40
Authors
Jin, Yongbin [1 ,2 ,3 ,4 ]
Liu, Xianwei [1 ]
Shao, Yecheng [1 ,4 ]
Wang, Hongtao [1 ,2 ,3 ,4 ]
Yang, Wei [1 ,2 ,3 ,4 ]
Affiliations
[1] Zhejiang Univ, Ctr X Mech, Hangzhou, Peoples R China
[2] Hangzhou Global Sci & Technol Innovat Ctr, ZJU, Hangzhou, Peoples R China
[3] Zhejiang Univ, State Key Lab Fluid Power & Mechatron Syst, Hangzhou, Peoples R China
[4] Zhejiang Univ, Inst Appl Mech, Hangzhou, Peoples R China
Keywords
ENTROPY STABILITY; DYNAMICS; DESIGN; ROBOT; MODEL;
DOI
10.1038/s42256-022-00576-3
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Fast and stable locomotion of legged robots imposes demanding and partly contradictory requirements, in particular a high control frequency together with an accurate dynamics model. Benefiting from the universal approximation ability and offline optimization of neural networks, reinforcement learning has been used to solve various challenging problems in legged-robot locomotion. However, optimal control of a quadruped robot requires optimizing multiple objectives, such as keeping balance, improving efficiency, realizing a periodic gait and following commands. These objectives cannot always be achieved simultaneously, especially at high speed. Here, we introduce an imitation-relaxation reinforcement learning (IRRL) method that optimizes these objectives in stages. To bridge the gap between simulation and reality, we further introduce the concept of stochastic stability into the analysis of system robustness. The state-space entropy decreasing rate provides a quantitative metric that sharply captures the occurrence of period-doubling bifurcation and possible chaos. By employing IRRL in training together with the stochastic stability analysis, we demonstrate a stable running speed of 5.0 m s⁻¹ for a MIT-Mini-Cheetah-like robot.
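The abstract names the state-space entropy decreasing rate as the stability metric but does not give its definition, which is in the full paper. As an illustration only, the following Python sketch (with hypothetical helper names, and the logistic map standing in for the robot dynamics) shows the general idea: push an ensemble of perturbed states through the dynamics, track the Shannon entropy of the ensemble over time, and read off its average rate of change. A contracting (stable, periodic) regime shows a clearly negative rate, while a chaotic regime does not.

```python
import numpy as np

def shannon_entropy(samples, bins=50, support=(0.0, 1.0)):
    """Shannon entropy (in nats) of a 1-D sample cloud via histogram binning."""
    hist, _ = np.histogram(samples, bins=bins, range=support)
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def entropy_decrease_rate(step, x0_cloud, n_steps=30):
    """Average per-step entropy change of an ensemble pushed through `step`.

    A negative value means the dynamics contracts state-space volume
    (the ensemble collapses toward an attractor, i.e. a stable regime).
    """
    x = x0_cloud.copy()
    h0 = shannon_entropy(x)
    for _ in range(n_steps):
        x = step(x)
    return (shannon_entropy(x) - h0) / n_steps

# Ensemble of perturbed initial states.
gen = np.random.default_rng(0)
cloud = gen.uniform(0.4, 0.6, size=20000)

# Logistic map as a toy dynamics: r=2.8 converges to a fixed point,
# r=3.9 is in the chaotic regime.
stable = lambda x: 2.8 * x * (1.0 - x)
chaotic = lambda x: 3.9 * x * (1.0 - x)

r_stable = entropy_decrease_rate(stable, cloud)
r_chaotic = entropy_decrease_rate(chaotic, cloud)
```

In this toy setting the stable map collapses the cloud onto its fixed point, so `r_stable` comes out strongly negative, while the chaotic map spreads the cloud over the attractor and its rate does not decrease; the paper's metric applies the same idea to the robot's closed-loop state distribution.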
Pages: 1198-1208 (11 pages)