Efficient Robot Skills Learning with Weighted Near-Optimal Experiences Policy Optimization

Cited by: 3
Authors
Hou, Liwei [1 ]
Wang, Hengsheng [1 ,2 ]
Zou, Haoran [1 ]
Wang, Qun [1 ,3 ]
Affiliations
[1] Cent South Univ, Coll Mech & Elect Engn, Changsha 410083, Peoples R China
[2] Cent South Univ, State Key Lab High Performance Complex Mfg, Changsha 410083, Peoples R China
[3] Hunan Univ, Modern Engn Training Ctr, Changsha 410082, Peoples R China
Source
APPLIED SCIENCES-BASEL | 2021, Vol. 11, No. 3
Keywords
robot skills learning; policy learning; policy gradient; experience; data efficiency; LOCOMOTION;
DOI
10.3390/app11031131
Chinese Library Classification (CLC)
O6 [Chemistry];
Subject Classification Code
0703;
Abstract
Autonomous learning of robot skills, analogous to the way human individuals learn, is more natural and more practical than hand-engineering those skills. Policy gradient methods are a class of reinforcement learning techniques with great potential for robot skills learning. However, they require a large number of online interactions between the robot and the environment to learn a good policy, which makes the learning process inefficient and increases the risk of damage to both the robot and the environment. In this paper, we propose a two-phase framework (an imitation phase followed by a practice phase) for efficient learning of robot walking skills that attends to both the quality of the learned skill and sample efficiency. Training starts with the imitation phase, in which the parameters of the policy network are updated in a supervised learning manner; the training set consists of trajectories produced by an iterative linear Gaussian controller, which we also call near-optimal experiences. In the second stage, the practice phase, experiences for policy network learning are collected directly from online interaction, and the policy network parameters are updated with model-free reinforcement learning. Experiences from both stages are stored in a weighted replay buffer and ordered according to the experience scoring algorithm proposed in this paper. The framework is evaluated on a biped robot walking task in a MATLAB simulation environment. The results show that its sample efficiency is much higher than that of ordinary policy gradient algorithms: the proposed algorithm achieved the highest cumulative reward, and the robot autonomously learned better walking skills. In addition, the weighted replay buffer can serve as a general module for other model-free reinforcement learning algorithms. Our framework provides a new way to combine model-based and model-free reinforcement learning for efficient policy network updates in robot skills learning.
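For illustration only, below is a minimal Python sketch of the kind of weighted replay buffer the abstract describes: trajectories from both the imitation phase and the practice phase are stored with a score, kept in score order, and sampled in proportion to that score. The class name `WeightedReplayBuffer`, the helper `score_trajectory`, and the cumulative-reward scoring rule are assumptions made for this sketch; the paper's actual experience scoring algorithm is not reproduced here.

```python
import random
from typing import Any, List, Tuple


class WeightedReplayBuffer:
    """Minimal sketch: keeps trajectories ordered by score, evicts the
    lowest-scored entries when full, and samples in proportion to score."""

    def __init__(self, capacity: int = 1000):
        self.capacity = capacity
        self.buffer: List[Tuple[float, Any]] = []  # (score, trajectory)

    def add(self, trajectory: Any, score: float) -> None:
        self.buffer.append((score, trajectory))
        # Keep entries ordered from highest to lowest score, bounded by capacity.
        self.buffer.sort(key=lambda item: item[0], reverse=True)
        del self.buffer[self.capacity:]

    def sample(self, batch_size: int) -> List[Any]:
        # Score-proportional sampling; assumes non-negative scores.
        weights = [max(score, 1e-6) for score, _ in self.buffer]
        picks = random.choices(self.buffer, weights=weights,
                               k=min(batch_size, len(self.buffer)))
        return [traj for _, traj in picks]


def score_trajectory(rewards: List[float]) -> float:
    """Hypothetical stand-in for the paper's experience scoring algorithm:
    here, simply the trajectory's cumulative reward."""
    return sum(rewards)


# Illustrative usage: imitation-phase trajectories (from the iterative linear
# Gaussian controller) and practice-phase trajectories (from online
# interaction) share the same scored buffer.
buffer = WeightedReplayBuffer(capacity=500)
buffer.add(trajectory={"phase": "imitation"}, score=score_trajectory([1.0, 0.8, 0.9]))
buffer.add(trajectory={"phase": "practice"}, score=score_trajectory([0.2, 0.1]))
batch = buffer.sample(batch_size=2)
```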
Pages: 1 - 20
Number of pages: 18