Robust quadruped jumping via deep reinforcement learning

Cited: 4
Authors
Bellegarda, Guillaume [1]
Nguyen, Chuong [2]
Nguyen, Quan [2]
Affiliations
[1] Ecole Polytechnique Federale de Lausanne (EPFL), CH-1015 Lausanne, VD, Switzerland
[2] University of Southern California, Los Angeles, CA 90007 USA
Keywords
Quadruped jumping; Reinforcement learning; Trajectory optimization; Agile robots; CHEETAH
DOI
10.1016/j.robot.2024.104799
CLC Classification Number
TP [Automation and Computer Technology]
Discipline Classification Code
0812
Abstract
In this paper, we consider the general task of jumping varying distances and heights with a quadrupedal robot in noisy environments, for example from uneven terrain and with variable robot dynamics parameters. To jump accurately under such conditions, we propose a deep reinforcement learning framework that leverages and augments the complex solution of a nonlinear trajectory optimization for quadrupedal jumping. While the standalone optimization limits jumping to take-off from flat ground and requires accurate assumptions about the robot dynamics, our approach improves robustness, allowing jumps from significantly uneven terrain under variable robot dynamics parameters and environmental conditions. Compared with walking and running, realizing aggressive jumping on hardware requires accounting for the motors' torque-speed relationship as well as the robot's total power limits. By incorporating these constraints into our learning framework, we successfully deploy our policy sim-to-real without further tuning, fully exploiting the available onboard power supply and motors. We demonstrate robustness to environmental noise in the form of foot disturbances of up to 6 cm in height, or 33% of the robot's nominal standing height, while jumping a distance of twice the body length.
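The torque-speed and power constraints described above can be made concrete with a short sketch. The snippet below is a minimal illustration, not the paper's implementation: it assumes a linear torque-speed envelope and a single shared power budget, and all constants (TAU_STALL, W_MAX, P_TOTAL) and the helper clip_torques are hypothetical names introduced here for illustration only.

    import numpy as np

    # Assumed constants for illustration, not values from the paper
    # or from any specific robot.
    TAU_STALL = 33.5   # N*m, assumed peak torque at zero joint speed
    W_MAX = 21.0       # rad/s, assumed no-load joint speed
    P_TOTAL = 250.0    # W, assumed total onboard power budget

    def clip_torques(tau_cmd, joint_vel):
        """Clamp commanded joint torques to a linear torque-speed
        envelope, then scale all torques uniformly if the total
        positive mechanical power exceeds the supply limit."""
        tau_cmd = np.asarray(tau_cmd, dtype=float)
        joint_vel = np.asarray(joint_vel, dtype=float)

        # Per-motor limit: tau_max(w) = TAU_STALL * (1 - |w|/W_MAX), floored at 0.
        tau_limit = TAU_STALL * np.clip(1.0 - np.abs(joint_vel) / W_MAX, 0.0, 1.0)
        tau = np.clip(tau_cmd, -tau_limit, tau_limit)

        # Total positive mechanical power across motors; rescale if over budget.
        power = float(np.sum(np.maximum(tau * joint_vel, 0.0)))
        if power > P_TOTAL:
            tau *= P_TOTAL / power
        return tau

    # Example: 12 joints commanding an aggressive extension mid-jump.
    tau = clip_torques(tau_cmd=[30.0] * 12, joint_vel=[10.0] * 12)

Applying the same clamp in training and on hardware keeps the learned policy from relying on torques the real actuators cannot deliver, which is what the abstract credits for sim-to-real deployment without further tuning.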
Pages: 10