Monte Carlo tree search control scheme for multibody dynamics applications

Cited by: 2
Authors
Tang, Yixuan [1 ]
Orzechowski, Grzegorz [1 ]
Prokop, Ales [2 ]
Mikkola, Aki [1 ]
Affiliations
[1] LUT Univ, Dept Mech Engn, Lappeenranta 53850, Finland
[2] Brno Univ Technol, Fac Mech Engn, Technicka 2896-2, Brno 61669, Czech Republic
Keywords
Monte Carlo Tree Search; Multibody dynamics; Reward functions; Parametric analysis; Artificial intelligence control; Inverted pendulum; Double pendulum; Swing-up; Strategy; Game; Go
DOI
10.1007/s11071-024-09509-8
Chinese Library Classification (CLC)
TH [Machinery and Instrument Industry]
Discipline Classification Code
0802
Abstract
There is considerable interest in applying reinforcement learning (RL) to improve machine control across multiple industries, and the automotive industry is one prime example. Monte Carlo Tree Search (MCTS) has emerged as a powerful approach to decision-making in games, even without prior knowledge of their rules. In this study, multibody system dynamics (MSD) control is first modeled as a Markov Decision Process and then solved with MCTS. Based on randomized exploration of the search space, the MCTS framework builds a selective search tree by repeatedly applying Monte Carlo rollouts at each child node. However, without a library of available choices, deciding among the many possible agent parameters can be daunting. In addition, MCTS poses a significant search challenge due to its large branching factor, which is typically addressed through appropriate parameter design, search guidance, action reduction, parallelization, and early termination. To address these shortcomings, the overarching goal of this study is to provide needed insight into inverted pendulum control via vanilla and modified MCTS agents. A series of reward functions is designed according to the control goal; each maps a specific distribution shape of the reward bonus and guides the MCTS-based controller to maintain the upright position. Numerical examples show that the reward-modified MCTS algorithms significantly improve control performance and robustness over the constant reward that constitutes the default choice in vanilla MCTS. Exponentially decaying reward functions perform better than constant or polynomial reward functions. Moreover, the exploitation versus exploration trade-off and discount parameters are carefully tested. The study's results can guide researchers applying RL to MSD systems.
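For orientation, the exploitation versus exploration trade-off mentioned in the abstract is, in standard MCTS, governed by the UCB1 selection score shown below; this is the well-known general formula, not a result specific to this paper. The three reward-bonus shapes that follow are only a hedged sketch of the families the abstract compares; the paper's exact expressions are not reproduced in this record, so the pole angle theta, decay rate k, exponent p, and bound theta_max are assumed symbols.

    a^{*} = \arg\max_{a}\Bigl[\, Q(s,a) + c\,\sqrt{\tfrac{\ln N(s)}{N(s,a)}} \,\Bigr]

    r_{\mathrm{const}}(\theta) = c_{0}, \qquad
    r_{\mathrm{poly}}(\theta) = 1 - \bigl(|\theta|/\theta_{\max}\bigr)^{p}, \qquad
    r_{\mathrm{exp}}(\theta) = e^{-k\,|\theta|}

Here Q(s,a) is the mean rollout return of action a from state s, N(.) are visit counts, and the constant c sets the exploration weight: larger c steers the search toward rarely visited branches, while the discount factor used in the rollouts weights near-term reward bonuses more heavily.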
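To make the search loop concrete, the following minimal Python sketch implements vanilla MCTS (selection, expansion, simulation, backpropagation) with UCB1 selection and an exponentially decaying reward bonus on a toy inverted pendulum. The dynamics, the discrete torque set TORQUES, the decay rate k, and every parameter value are illustrative assumptions, not the paper's setup.

    # Minimal sketch of vanilla MCTS with UCB1 selection on a toy inverted
    # pendulum. Dynamics, torque set, reward shape, and all parameter values
    # are illustrative assumptions, not the paper's exact setup.
    import math
    import random

    DT = 0.02                    # integration step [s] (assumed)
    GRAVITY = 9.81               # gravitational acceleration [m/s^2]
    LENGTH = 1.0                 # pendulum length [m] (assumed)
    TORQUES = (-2.0, 0.0, 2.0)   # discrete action set (assumed)

    def step(state, torque):
        """One semi-implicit Euler step; theta = 0 is the upright position."""
        theta, omega = state
        omega = omega + ((GRAVITY / LENGTH) * math.sin(theta) + torque) * DT
        return (theta + omega * DT, omega)

    def reward(state, k=3.0):
        """Exponentially decaying reward bonus, largest near upright (assumed)."""
        theta, _ = state
        return math.exp(-k * abs(theta))

    class Node:
        def __init__(self, state, parent=None):
            self.state = state
            self.parent = parent
            self.children = {}   # action -> Node
            self.visits = 0
            self.value = 0.0     # running mean of rollout returns

    def uct_select(node, c=1.4):
        """Child with the highest UCB1 score: mean value + exploration bonus."""
        return max(
            node.children.values(),
            key=lambda ch: ch.value
            + c * math.sqrt(math.log(node.visits) / ch.visits),
        )

    def rollout(state, depth=50, gamma=0.99):
        """Random playout; returns the discounted sum of reward bonuses."""
        ret, disc = 0.0, 1.0
        for _ in range(depth):
            state = step(state, random.choice(TORQUES))
            ret += disc * reward(state)
            disc *= gamma
        return ret

    def mcts(root_state, iterations=300):
        """Grow a selective search tree; return the most-visited root action."""
        root = Node(root_state)
        for _ in range(iterations):
            node = root
            while len(node.children) == len(TORQUES):      # 1. selection
                node = uct_select(node)
            action = random.choice(
                [a for a in TORQUES if a not in node.children]
            )
            child = Node(step(node.state, action), parent=node)
            node.children[action] = child                  # 2. expansion
            node = child
            ret = rollout(node.state)                      # 3. simulation
            while node is not None:                        # 4. backpropagation
                node.visits += 1
                node.value += (ret - node.value) / node.visits
                node = node.parent
        return max(root.children, key=lambda a: root.children[a].visits)

    if __name__ == "__main__":
        state = (0.1, 0.0)   # slightly tilted, at rest
        for _ in range(10):
            state = step(state, mcts(state))
        print(f"pole angle after 10 control steps: {state[0]:+.3f} rad")

Swapping reward() for a constant or polynomial shape reproduces the comparison the abstract describes, while the exploration constant c and the rollout discount gamma correspond to the trade-off and discount parameters the study tests.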
Pages: 8363-8391
Page count: 29