Deep Reinforcement Learning of Physically Simulated Character Control

Cited: 0
Authors
Liu, Rui [1 ]
Zhang, Bin [1 ]
Affiliations
[1] Beijing University of Posts and Telecommunications, School of Artificial Intelligence, Beijing 100876, People's Republic of China
Source
Proceedings of 2024 Chinese Intelligent Systems Conference, Vol. II, CISC 2024 | 2024, Vol. 1284
Funding
National Natural Science Foundation of China
Keywords
Deep reinforcement learning; Physically simulated character; Locomotion control
DOI
10.1007/978-981-97-8654-1_6
CLC Number
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Modeling the motion of humans and animals is a highly challenging problem in artificial intelligence. To synthesize graceful and lifelike behaviors for physically simulated characters, traditional methods rely on motion capture, finite state machines, or morphology-specific knowledge to guide motion generation. While deep reinforcement learning offers broad avenues for synthesizing motion for simulated characters, the quality of the motion produced by existing methods often falls short of manually designed controllers and may exhibit significant artifacts. In this paper, we develop a simulated character control system based on deep reinforcement learning to address these challenges. The system leverages a moderate number of motion examples as prior knowledge and combines reinforcement learning with deep neural networks to generate motions that are robust and much closer to natural human movement. Additionally, we use goal-directed reinforcement learning to guide the agent in performing user-specified tasks while imitating a reference motion, such as walking to a designated location with a zombie-like gait. To reduce motion artifacts, we introduce the notion of motion mirror symmetry, encouraging symmetrical behavior in the agent by modifying the loss function. We demonstrate the effectiveness of our motion control system on a 3D humanoid robot, showing that our approach produces symmetrical and lifelike behaviors.
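The two technical ingredients the abstract names can be made concrete with a short sketch. The Python snippet below is a hypothetical illustration, not the authors' implementation: it shows a DeepMimic-style reward that blends an imitation term with a goal term, and a mirror-symmetry penalty that compares the policy's action in a state against the mirrored action in the mirrored state. The joint indices, weights, and the `policy` callable are all assumed for illustration.

```python
import numpy as np

# Hypothetical joint layout for a humanoid; all indices are illustrative.
LEFT = [3, 4, 5]       # e.g. left hip, knee, ankle
RIGHT = [6, 7, 8]      # matching right-side joints
SIGN_FLIP = [0, 1]     # components that negate under a left-right mirror

def mirror(x):
    """Return the left-right mirrored counterpart of a pose or action vector."""
    m = x.copy()
    m[LEFT], m[RIGHT] = x[RIGHT], x[LEFT]  # swap left/right joint groups
    m[SIGN_FLIP] = -x[SIGN_FLIP]           # flip sign-sensitive components
    return m

def symmetry_loss(policy, states):
    """Mean squared gap between pi(s) and M(pi(M(s))); zero for a symmetric policy."""
    gaps = [policy(s) - mirror(policy(mirror(s))) for s in states]
    return float(np.mean(np.square(gaps)))

def total_reward(r_imitation, r_goal, w_i=0.7, w_g=0.3):
    """Weighted sum of imitation and goal rewards (weights are illustrative)."""
    return w_i * r_imitation + w_g * r_goal

# In training, the symmetry term would be added to the usual policy objective:
#   L_total = L_policy + w_sym * symmetry_loss(policy, sampled_states)
```

Under these assumptions, a policy that behaves identically on mirrored states incurs zero symmetry penalty, so gait asymmetries (a common artifact of learned locomotion) are directly discouraged by the loss.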
Pages: 50-61
Number of Pages: 12