Reinforcement learning-based optimal control for Markov jump systems with completely unknown dynamics
Cited by: 6
Authors:
Shi, Xiongtao [1,2]
Li, Yanjie [1,2]
Du, Chenglong [3]
Chen, Chaoyang [4]
Zong, Guangdeng [5]
Gui, Weihua [3]
Affiliations:
[1] Harbin Inst Technol Shenzhen, Guangdong Key Lab Intelligent Morphing Mech & Adap, Shenzhen 518055, Peoples R China
[2] Harbin Inst Technol Shenzhen, Sch Mech Engn & Automat, Shenzhen 518055, Peoples R China
[3] Cent South Univ, Sch Automat, Changsha 410083, Peoples R China
[4] Hunan Univ Sci & Technol, Sch Informat & Elect Engn, Xiangtan 411201, Peoples R China
[5] Tiangong Univ, Sch Control Sci & Engn, Tianjin 300387, Peoples R China
Source: Automatica
Keywords:
Markov jump systems;
Optimal control;
Coupled algebraic Riccati equation;
Parallel policy iteration;
Reinforcement learning;
ADAPTIVE OPTIMAL-CONTROL;
TRACKING CONTROL;
LINEAR-SYSTEMS;
DOI:
10.1016/j.automatica.2024.111886
CLC Classification Number:
TP [Automation Technology, Computer Technology];
Discipline Classification Code:
0812;
Abstract:
In this paper, the optimal control problem for a class of unknown Markov jump systems (MJSs) is investigated via parallel policy iteration-based reinforcement learning (PPI-RL) algorithms. First, by solving the linear parallel Lyapunov equation, a model-based PPI-RL algorithm is studied to learn the solution of the nonlinear coupled algebraic Riccati equation (CARE) of MJSs with known dynamics, thereby updating the optimal control gain. Then, a novel partially model-free PPI-RL algorithm is proposed for the scenario in which the dynamics of the MJS are partially unknown, where the optimal solution of the CARE is learned from the mixed input-output data of all modes. Furthermore, for MJSs with completely unknown dynamics, a completely model-free PPI-RL algorithm is developed to obtain the optimal control gain by removing the dependence on model information in the process of solving the CARE. It is proved that the proposed PPI-RL algorithms converge to the unique optimal solution of the CARE for MJSs with known, partially unknown, and completely unknown dynamics, respectively. Finally, simulation results are presented to illustrate the feasibility and effectiveness of the PPI-RL algorithms.
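The abstract outlines the model-based step of the PPI-RL scheme: evaluate the current mode-dependent gains by solving a set of coupled Lyapunov equations, then update each mode's gain from the resulting value matrices. The sketch below is a rough, hedged illustration of that idea only, not the paper's algorithm or data: it assumes a continuous-time Markov jump linear system with per-mode matrices A_i, B_i, weights Q_i, R_i, and transition-rate matrix Pi, and the function name ppi_model_based, the Kronecker-product vectorized Lyapunov solve, and all numerical values are illustrative assumptions.

```python
# Hedged sketch of model-based parallel policy iteration for the coupled
# algebraic Riccati equations (CARE) of a continuous-time Markov jump linear
# system.  All system data below are assumed for demonstration purposes.
import numpy as np

def ppi_model_based(A, B, Q, R, Pi, K0, iters=50, tol=1e-9):
    """Iterate: coupled Lyapunov policy evaluation, then per-mode gain update.

    A, B, Q, R : lists of per-mode matrices (length N)
    Pi         : N x N transition-rate matrix (rows sum to zero)
    K0         : list of initial mean-square stabilizing gains
    Returns (P, K): per-mode CARE solutions and control gains.
    """
    N, n = len(A), A[0].shape[0]
    K = [k.copy() for k in K0]
    I_n = np.eye(n)
    for _ in range(iters):
        # Policy evaluation: solve, for all modes simultaneously,
        #   Acl_i' P_i + P_i Acl_i + sum_j Pi[i,j] P_j = -(Q_i + K_i' R_i K_i)
        # by stacking the vectorized equations into one linear system.
        M = np.zeros((N * n * n, N * n * n))
        rhs = np.zeros(N * n * n)
        for i in range(N):
            Acl = A[i] - B[i] @ K[i]
            M[i*n*n:(i+1)*n*n, i*n*n:(i+1)*n*n] = (
                np.kron(I_n, Acl.T) + np.kron(Acl.T, I_n))
            for j in range(N):
                M[i*n*n:(i+1)*n*n, j*n*n:(j+1)*n*n] += Pi[i, j] * np.eye(n * n)
            rhs[i*n*n:(i+1)*n*n] = -(Q[i] + K[i].T @ R[i] @ K[i]).flatten('F')
        p_vec = np.linalg.solve(M, rhs)
        P = [p_vec[i*n*n:(i+1)*n*n].reshape(n, n, order='F') for i in range(N)]
        P = [(Pm + Pm.T) / 2 for Pm in P]  # symmetrize against round-off
        # Policy improvement: K_i <- R_i^{-1} B_i' P_i
        K_new = [np.linalg.solve(R[i], B[i].T @ P[i]) for i in range(N)]
        if max(np.linalg.norm(K_new[i] - K[i]) for i in range(N)) < tol:
            return P, K_new
        K = K_new
    return P, K

# Illustrative two-mode example (all numbers are assumptions).
A = [np.array([[-2.0, 1.0], [0.0, -3.0]]), np.array([[-1.0, 0.0], [1.0, -2.0]])]
B = [np.array([[0.0], [1.0]]), np.array([[1.0], [0.0]])]
Q = [np.eye(2), 2 * np.eye(2)]
R = [np.eye(1), np.eye(1)]
Pi = np.array([[-0.5, 0.5], [0.3, -0.3]])    # transition-rate matrix
K0 = [np.zeros((1, 2)), np.zeros((1, 2))]    # stabilizing here: each A_i is Hurwitz
P, K = ppi_model_based(A, B, Q, R, Pi, K0)
print("P_1 =\n", P[0], "\nK_1 =", K[0])
```

The partially and completely model-free variants described in the abstract would replace this model-based Lyapunov solve with relations built from measured state and input data of all modes; that step is not sketched here.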
Pages: 8