AN EFFICIENT DP ALGORITHM ON A TREE-STRUCTURE FOR FINITE HORIZON OPTIMAL CONTROL PROBLEMS

Cited by: 32
Authors
Alla, Alessandro [1 ]
Falcone, Maurizio [2 ]
Saluzzi, Luca [3 ]
Affiliations
[1] PUC Rio, Rua Marques de Sao Vicente 225, BR-22451900 Gavea Rio De Janeiro, RJ, Brazil
[2] Sapienza Univ Roma, Piazzale Aldo Moro 5, I-00185 Rome, Italy
[3] Gran Sasso Sci Inst, Viale F Crispi 7, I-67100 L'Aquila, Italy
Keywords
dynamic programming; Hamilton-Jacobi-Bellman equation; optimal control; tree structure; APPROXIMATION; SCHEME;
DOI
10.1137/18M1203900
CLC number
O29 [Applied Mathematics];
Subject classification code
070104 ;
Abstract
The classical dynamic programming (DP) approach to optimal control problems is based on the characterization of the value function as the unique viscosity solution of a Hamilton-Jacobi-Bellman equation. The DP scheme for the numerical approximation of viscosity solutions of Bellman equations is typically based on a time discretization which is projected on a fixed state-space grid. The time discretization can be done by a one-step scheme for the dynamics, and the projection on the grid typically uses a local interpolation. Clearly, the use of a grid is a limitation with respect to possible applications to high-dimensional problems due to the curse of dimensionality. Here, we present a new approach for finite horizon optimal control problems where the value function is computed using a DP algorithm on a tree structure constructed from the time-discrete dynamics. In this way there is no need to build a fixed space triangulation and to project on it. The tree guarantees a perfect matching with the discrete dynamics and drops the cost of the space interpolation, allowing for the solution of very high-dimensional problems. Numerical tests show the effectiveness of the proposed method.
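The tree-structure DP idea summarized in the abstract can be sketched in a few lines: a forward phase grows a tree of states reachable under a finite set of controls via a one-step (Euler) scheme, and a backward phase computes the value at every node without any spatial grid or interpolation. The 1-D dynamics, cost functions, control set, and parameter values below are hypothetical placeholders chosen for illustration, not the test problems from the paper.

```python
# Sketch of a tree-structure DP algorithm for a finite horizon problem.
# All model data here (f, costs, dt, N, controls, x0) are illustrative.
dt = 0.1
N = 5                        # number of time steps (horizon T = N*dt)
controls = [-1.0, 0.0, 1.0]  # finite control set

def f(x, u):                 # right-hand side of the dynamics x' = f(x, u)
    return u

def running_cost(x, u):
    return x**2 + 0.1 * u**2

def terminal_cost(x):
    return x**2

# Forward phase: build the tree of states reachable from x0 with an
# explicit Euler step. Level n holds the states after n steps; node j
# at level n has children j*len(controls)+k at level n+1.
x0 = 1.0
tree = [[x0]]
for n in range(N):
    tree.append([x + dt * f(x, u) for x in tree[-1] for u in controls])

# Backward phase: dynamic programming on the tree. Values live exactly
# on the tree nodes, so no state-space grid or interpolation is needed.
values = [terminal_cost(x) for x in tree[N]]
for n in range(N - 1, -1, -1):
    new_values = []
    for j, x in enumerate(tree[n]):
        best = min(dt * running_cost(x, u) + values[j * len(controls) + k]
                   for k, u in enumerate(controls))
        new_values.append(best)
    values = new_values

print(f"approximate value at x0: {values[0]:.4f}")
```

Note that the tree has `len(controls)**N` leaves, which is why pruning strategies (merging nearby nodes) matter in practice for longer horizons; the sketch above omits them for clarity.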
Pages: A2384 - A2406
Page count: 23
References
33 in total
[1]  
Al'brekht E. G., 1961, PMM-J APPL MATH MEC, V25, P254
[2]   Alla A., Falcone M., Volkwein S. Error analysis for POD approximations of infinite horizon problems via the dynamic programming approach [J]. SIAM Journal on Control and Optimization, 2017, 55 (05): 3091-3115
[3]   Alla A., Falcone M., Kalise D. An efficient policy iteration algorithm for dynamic programming equations [J]. SIAM Journal on Scientific Computing, 2015, 37 (01): A181-A200
[4]  
[Anonymous], 2013, LECT NOTES
[5]  
[Anonymous], 2010, PARTIAL DIFFERENTIAL
[6]  
[Anonymous], 2002, Algorithms for Minimization Without Derivatives
[7]  
[Anonymous], 2010, GRADE STUD MATH
[8]  
Bardi M, 1997, Optimal Control and Viscosity Solutions of Hamilton-Jacobi-Bellman Equations, V12
[9]  
Bellman R.E., 1957, DYNAMIC PROGRAMMING
[10]  
Boltyanskii V. G., 1956, REP ACAD SCI USSR, V110