THE RELATIONSHIP BETWEEN THE MAXIMUM PRINCIPLE AND DYNAMIC PROGRAMMING

Cited: 93
Authors
CLARKE, FH [1 ]
VINTER, RB [1 ]
Affiliation
[1] Imperial College of Science & Technology, University of London, Dept. of Electrical Engineering, London SW7 2BT, England
Keywords
Control systems, optimal; Theory
DOI
10.1137/0325071
CLC classification number
TP [Automation technology, computer technology];
Subject classification number
0812;
Abstract
Let V(t,x) be the infimum cost of an optimal control problem, viewed as a function of the initial time and state (t,x). Dynamic programming is concerned with the properties of V and in particular with its characterization as a solution to the Hamilton-Jacobi-Bellman equation. Heuristic arguments have long been advanced relating the maximum principle to dynamic programming according to p(t) = -V_x(t, x_0(t)). Here x_0 is the minimizing state function under consideration and p is the costate function of the maximum principle. In this paper we examine the validity of such claims and find that this relationship, interpreted as a differential inclusion involving the generalized gradient, is indeed true, almost everywhere and at the endpoints, for a very large class of nonsmooth optimal control problems.
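Written out in display form, the relationship the abstract describes is the following (a sketch; the dynamics f, running cost L, and terminal cost l are standard Bolza-problem notation assumed here, not fixed by this record, and sign conventions vary across texts):

```latex
% Value function of the underlying optimal control problem
% (illustrative Bolza form, not specified in this record):
%   V(t,x) = inf_u  \int_t^T L(s, x(s), u(s))\,ds + l(x(T)),
%   subject to  \dot{x}(s) = f(s, x(s), u(s)),  x(t) = x.
%
% Hamilton-Jacobi-Bellman equation satisfied (formally) by V:
\[
  V_t(t,x) + \min_{u}\bigl\{\, V_x(t,x)\cdot f(t,x,u) + L(t,x,u) \,\bigr\} = 0 .
\]
% Heuristic link to the maximum principle, for a minimizing state
% trajectory x_0 with costate p:
\[
  p(t) = -\,V_x\bigl(t, x_0(t)\bigr).
\]
% When V is nonsmooth, the paper interprets this as a differential
% inclusion involving the generalized gradient \partial_x V:
\[
  -p(t) \in \partial_x V\bigl(t, x_0(t)\bigr) \quad \text{a.e.}
\]
```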
Pages: 1291-1311
Page count: 21