CONSTRAINED AND UNCONSTRAINED OPTIMAL DISCOUNTED CONTROL OF PIECEWISE DETERMINISTIC MARKOV PROCESSES

Cited: 20
Authors
Costa, O. L. V. [1 ]
Dufour, F. [2 ]
Piunovskiy, A. B. [3 ]
Affiliations
[1] Univ Sao Paulo, Escola Politecn, Dept Engn Telecomunicacoes & Controle, BR-05508900 Sao Paulo, Brazil
[2] Univ Bordeaux, Inst Polytech Bordeaux, INRIA Bordeaux Sud Ouest, Team CQFD,IMB,Inst Math Bordeaux, Bordeaux, France
[3] Univ Liverpool, Dept Math Sci, Liverpool L69 7ZL, Merseyside, England
Funding
São Paulo Research Foundation (FAPESP); Engineering and Physical Sciences Research Council (EPSRC);
Keywords
unconstrained/constrained control problem; continuous control; piecewise deterministic Markov process; continuous-time Markov decision process; discounted cost; discrete-time;
DOI
10.1137/140996380
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812;
Abstract
The main goal of this paper is to study the infinite-horizon expected discounted continuous-time optimal control problem of piecewise deterministic Markov processes, with the control acting continuously on the jump intensity lambda and on the transition measure Q of the process, but not on the deterministic flow phi. The paper contributes to both the unconstrained and the constrained cases. The set of admissible control strategies is assumed to consist of policies, possibly randomized and depending on the history of the process, taking values in a set-valued action space. For the unconstrained case we provide sufficient conditions, based on the three local characteristics of the process phi, lambda, Q and the semicontinuity properties of the set-valued action space, to guarantee the existence and uniqueness of a solution to the integro-differential optimality equation (the so-called Bellman-Hamilton-Jacobi equation), as well as the existence of an optimal (and delta-optimal) deterministic stationary control strategy for the problem. For the constrained case we show that the values of the constrained control problem and of an associated infinite-dimensional linear programming (LP) problem coincide, and moreover we provide sufficient conditions for the solvability of the LP problem as well as for the existence of an optimal feasible randomized stationary control strategy for the constrained problem.
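In the standard PDMP setting, the integro-differential optimality equation mentioned in the abstract can be sketched roughly as follows (a sketch only; the discount rate alpha, running cost f, state space E, and set-valued action map A(x) are notational assumptions not fixed by the abstract):

```latex
% Sketch of the discounted-cost Bellman-Hamilton-Jacobi equation for a PDMP
% with local characteristics (phi, lambda, Q); alpha > 0 is the discount rate,
% f the running cost, and A(x) the set-valued action space (notation assumed).
\[
  \inf_{a \in A(x)} \Big\{ f(x,a)
    + \mathcal{X} V(x)          % derivative of V along the flow phi
    - \alpha V(x)
    + \lambda(x,a) \int_{E} \bigl( V(y) - V(x) \bigr)\, Q(dy \mid x,a)
  \Big\} = 0 ,
  \qquad
  \mathcal{X} V(x) = \frac{d}{dt}\, V\bigl(\phi(x,t)\bigr)\Big|_{t=0} .
\]
```

The three local characteristics appear exactly as in the abstract: the control enters through lambda and Q only, while the flow phi acts through the directional derivative term.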
Pages: 1444-1474 (31 pages)