Stability-constrained Markov Decision Processes using MPC

Cited by: 6
Authors
Zanon, Mario [1 ]
Gros, Sebastien [2 ]
Palladino, Michele [3 ]
Affiliations
[1] IMT Sch Adv Studies Lucca, Piazza San Francesco 19, I-55100 Lucca, Italy
[2] NTNU, Trondheim, Norway
[3] Univ Aquila, Dept Informat Engn Comp Sci & Math DISIM, via Vetoio, I-67100 L'Aquila, Italy
Keywords
Markov Decision Processes; Model Predictive Control; Stability; Safe reinforcement learning; MODEL-PREDICTIVE CONTROL; SYSTEMS
DOI
10.1016/j.automatica.2022.110399
Chinese Library Classification (CLC)
TP [automation technology; computer technology]
Subject classification code
0812
Abstract
In this paper, we consider solving discounted Markov Decision Processes (MDPs) under the constraint that the resulting policy is stabilizing. In practice, MDPs are solved based on some form of policy approximation. We will leverage recent results proposing to use Model Predictive Control (MPC) as a structured approximator in the context of Reinforcement Learning, which makes it possible to introduce stability requirements directly inside the MPC-based policy. This restricts the solution of the MDP to stabilizing policies by construction. Because the stability theory for MPC is most mature in the undiscounted case, we will first show that stable discounted MDPs can be reformulated as undiscounted ones. This observation entails that the undiscounted MPC-based policy with stability guarantees produces the optimal policy for the discounted MDP if that policy is itself stabilizing, and the best stabilizing policy otherwise. © 2022 Elsevier Ltd. All rights reserved.
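As a minimal sketch of the setting described in the abstract (the notation below is assumed for illustration and is not taken from the paper), the discounted MDP seeks a policy $\pi$ minimizing

V_\gamma^{\pi}(s_0) = \mathbb{E}\!\left[\sum_{k=0}^{\infty} \gamma^{k}\, \ell\big(s_k, \pi(s_k)\big)\right], \qquad \gamma \in (0,1),

while an MPC-based policy approximation returns the first input of an undiscounted, finite-horizon optimal control problem equipped with stabilizing terminal ingredients,

\pi_{\mathrm{MPC}}(s) = u_0^{\star}(s), \qquad u_0^{\star}(s) \in \arg\min_{u_0,\ldots,u_{N-1}} \; V_{\mathrm{f}}(x_N) + \sum_{k=0}^{N-1} \ell(x_k, u_k) \quad \text{s.t.} \;\; x_0 = s, \;\; x_{k+1} = f(x_k, u_k), \;\; x_N \in \mathbb{X}_{\mathrm{f}},

where $f$ is a (nominal) model of the dynamics, and the terminal cost $V_{\mathrm{f}}$ and terminal set $\mathbb{X}_{\mathrm{f}}$ are the standard MPC ingredients used to certify closed-loop stability; restricting the MDP solution to policies of this form is what enforces stability by construction.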
Pages: 9
Related papers
50 records in total
  • [21] Stability-constrained model predictive control
    Cheng, X
    Krogh, BH
    IEEE TRANSACTIONS ON AUTOMATIC CONTROL, 2001, 46 (11) : 1816 - 1820
  • [22] Joint chance-constrained Markov decision processes
    Varagapriya, V.
    Singh, Vikas Vikram
    Lisser, Abdel
    ANNALS OF OPERATIONS RESEARCH, 2023, 322 (02) : 1013 - 1035
  • [23] Strict-sense constrained Markov decision processes
    Hsu, SP
    Arapostathis, A
    2004 IEEE INTERNATIONAL CONFERENCE ON SYSTEMS, MAN & CYBERNETICS, VOLS 1-7, 2004, : 194 - 199
  • [24] HCMDP: a Hierarchical Solution to Constrained Markov Decision Processes
    Feyzabadi, Seyedshams
    Carpin, Stefano
    2015 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA), 2015, : 3971 - 3978
  • [25] Constrained discounted Markov decision processes and Hamiltonian cycles
    Feinberg, EA
    PROCEEDINGS OF THE 36TH IEEE CONFERENCE ON DECISION AND CONTROL, VOLS 1-5, 1997, : 2821 - 2826
  • [26] Constrained discounted Markov decision processes and Hamiltonian Cycles
    Feinberg, EA
    MATHEMATICS OF OPERATIONS RESEARCH, 2000, 25 (01) : 130 - 140
  • [28] Stochastic approximations of constrained discounted Markov decision processes
    Dufour, Francois
    Prieto-Rumeau, Tomas
    JOURNAL OF MATHEMATICAL ANALYSIS AND APPLICATIONS, 2014, 413 (02) : 856 - 879
  • [29] Constrained Markov decision processes with first passage criteria
    Huang, Yonghui
    Wei, Qingda
    Guo, Xianping
    ANNALS OF OPERATIONS RESEARCH, 2013, 206 (01) : 197 - 219
  • [30] STOCHASTIC DOMINANCE-CONSTRAINED MARKOV DECISION PROCESSES
    Haskell, William B.
    Jain, Rahul
    SIAM JOURNAL ON CONTROL AND OPTIMIZATION, 2013, 51 (01) : 273 - 303