Learning Symmetric and Low-Energy Locomotion

Cited by: 132
Authors
Yu, Wenhao [1 ]
Turk, Greg [1 ]
Liu, C. Karen [1 ]
Affiliations
[1] Georgia Institute of Technology, School of Interactive Computing, Atlanta, GA 30332 USA
Source
ACM TRANSACTIONS ON GRAPHICS | 2018, Vol. 37, No. 4
Keywords
locomotion; reinforcement learning; curriculum learning; physics
DOI
10.1145/3197517.3201397
Chinese Library Classification (CLC)
TP31 [Computer Software]
Discipline codes
081202; 0835
Abstract
Learning locomotion skills is a challenging problem. To generate realistic and smooth locomotion, existing methods use motion capture, finite state machines, or morphology-specific knowledge to guide the motion generation algorithms. Deep reinforcement learning (DRL) is a promising approach for the automatic creation of locomotion control. Indeed, a standard benchmark for DRL is to automatically create a running controller for a biped character from a simple reward function [Duan et al. 2016]. Although several different DRL algorithms can successfully create a running controller, the resulting motions usually look nothing like a real runner. This paper takes a minimalist learning approach to the locomotion problem, without the use of motion examples, finite state machines, or morphology-specific knowledge. We introduce two modifications to the DRL approach that, when used together, produce locomotion behaviors that are symmetric, low-energy, and much closer to those of a real person. First, we introduce a new term to the loss function (not the reward function) that encourages symmetric actions. Second, we introduce a new curriculum learning method that provides modulated physical assistance to help the character with left/right balance and forward movement. The algorithm automatically computes appropriate assistance to the character and gradually relaxes this assistance, so that eventually the character learns to move entirely without help. Because our method does not make use of motion capture data, it can be applied to a variety of character morphologies. We demonstrate locomotion controllers for the lower half of a biped, a full humanoid, a quadruped, and a hexapod. Our results show that learned policies are able to produce symmetric, low-energy gaits. In addition, speed-appropriate gait patterns emerge without any guidance from motion examples or contact planning.
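The two modifications lend themselves to compact sketches. First, a minimal sketch of a mirror-symmetry term added to the policy loss, assuming a PyTorch MLP policy; the mirroring matrices `M_s` and `M_a`, the helper names, and the weight `w_sym` are illustrative stand-ins rather than values from the paper (the actual mirroring depends on the character's morphology):

```python
# Sketch of a mirror-symmetry loss added to the policy objective.
# All names (Policy, symmetry_loss, M_s, M_a, w_sym) are illustrative.
import torch
import torch.nn as nn

class Policy(nn.Module):
    """Small MLP mapping states to mean actions."""
    def __init__(self, state_dim: int, action_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, action_dim),
        )

    def forward(self, s: torch.Tensor) -> torch.Tensor:
        return self.net(s)

def symmetry_loss(policy: Policy, states: torch.Tensor,
                  M_s: torch.Tensor, M_a: torch.Tensor) -> torch.Tensor:
    """Penalize the policy when the action for a mirrored state is not
    the mirror of the action for the original state."""
    a = policy(states)                  # actions for the original states
    a_mirror = policy(states @ M_s.T)   # actions for the mirrored states
    return ((a - a_mirror @ M_a.T) ** 2).sum(dim=-1).mean()

state_dim, action_dim = 8, 4
policy = Policy(state_dim, action_dim)
# Toy mirroring matrices that swap the left/right halves of each vector.
M_s = torch.eye(state_dim).roll(state_dim // 2, dims=0)
M_a = torch.eye(action_dim).roll(action_dim // 2, dims=0)
states = torch.randn(32, state_dim)

w_sym = 4.0                             # illustrative weight
surrogate = torch.tensor(0.0)           # stand-in for the usual RL loss
loss = surrogate + w_sym * symmetry_loss(policy, states, M_s, M_a)
loss.backward()                         # gradients flow through both passes
```

Because the term lives in the loss rather than the reward, it shapes the policy directly without distorting the task objective the agent is rewarded for. Second, a sketch of the idea behind the assistance curriculum: virtual balancing and propelling forces are scaled by a coefficient that is relaxed as the policy improves. The fixed threshold-and-decay schedule and all names below are hypothetical simplifications; the paper computes and relaxes the assistance automatically:

```python
# Sketch of curriculum-modulated assistance (hypothetical schedule).
def relax_assistance(alpha: float, avg_return: float,
                     threshold: float, decay: float = 0.9) -> float:
    """Reduce the assistance coefficient once the policy performs well."""
    return alpha * decay if avg_return >= threshold else alpha

def evaluate_policy(iteration: int) -> float:
    """Stub standing in for the average episode return measured in training."""
    return 100.0 + 0.5 * iteration

alpha = 1.0                              # full assistance at the start
for iteration in range(200):
    avg_return = evaluate_policy(iteration)
    alpha = relax_assistance(alpha, avg_return, threshold=120.0)
    # Inside the simulator, the virtual balancing/propelling forces would
    # be scaled by alpha, shrinking toward zero as training succeeds.
```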
Pages: 12