Stochastic Optimal Regulation of Nonlinear Networked Control Systems by Using Event-Driven Adaptive Dynamic Programming

Cited by: 47
Authors
Sahoo, Avimanyu [1]
Jagannathan, Sarangapani [2]
Affiliations
[1] DEI Grp, Millersville, MD 21108 USA
[2] Missouri Univ Sci & Technol, Dept Elect & Comp Engn, Rolla, MO 65409 USA
Funding
U.S. National Science Foundation
Keywords
Adaptive dynamic programming (ADP); event sampled control; neural networks (NNs); optimal control; TRIGGERED CONTROL; DESIGN
DOI
10.1109/TCYB.2016.2519445
CLC Number
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
In this paper, an event-driven stochastic adaptive dynamic programming (ADP)-based technique is introduced for nonlinear systems with a communication network within the feedback loop. A near-optimal control policy is designed using an actor-critic framework and ADP with an event-sampled state vector. First, the system dynamics are approximated by using a novel neural network (NN) identifier with an event-sampled state vector. The optimal control policy is generated via an actor NN by using the NN identifier and the value function, which is approximated by a critic NN through ADP. The stochastic NN identifier, actor, and critic NN weights are tuned at the event-sampled instants, leading to aperiodic weight-tuning laws. In addition, an adaptive event-sampling condition based on the estimated NN weights is designed by using the Lyapunov technique to ensure ultimate boundedness of all closed-loop signals along with the approximation accuracy. The net result is an event-driven stochastic ADP technique that can significantly reduce computation and network transmissions. Finally, the analytical design is substantiated with simulation results.
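The abstract outlines an event-triggered actor-critic structure: weights are tuned only at event-sampled instants, and the sampling condition itself adapts with the weight estimates. The Python sketch below is not the authors' algorithm; it only illustrates that general loop under simplifying assumptions. The dynamics f(), the feature map phi(), the gains alpha_c, alpha_a, Q, R, the network sizes, and the trigger threshold sigma are illustrative choices, and the true model is used in place of the paper's event-sampled NN identifier to keep the sketch short.

```python
# Minimal sketch (not the paper's implementation) of an event-driven
# actor-critic ADP loop. All values below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

n, m, h = 2, 1, 8                    # state, input, hidden-layer sizes (assumed)
Wc = rng.normal(0.0, 0.1, h)         # critic NN output weights
Wa = rng.normal(0.0, 0.1, (m, h))    # actor NN output weights
V = rng.normal(0.0, 0.5, (h, n))     # fixed random hidden-layer weights

def phi(x):
    """Hidden-layer feature vector shared by the actor and critic."""
    return np.tanh(V @ x)

def f(x, u, w):
    """Assumed example stochastic dynamics; the paper instead learns the
    dynamics with an event-sampled NN identifier."""
    return np.array([0.9 * x[0] + 0.1 * x[1],
                     -0.2 * np.sin(x[0]) + 0.8 * x[1] + 0.5 * u[0]]) + w

alpha_c, alpha_a = 0.05, 0.02        # learning rates (assumed)
Q, R = np.eye(n), 0.1 * np.eye(m)    # stage-cost weights (assumed)
sigma = 0.3                          # event-trigger sensitivity (assumed)
eps = 1e-4                           # finite-difference step for the actor target

x = np.array([1.0, -0.5])
x_event = x.copy()                   # last transmitted (event-sampled) state
events = 0

for k in range(200):
    # Simplified event-sampling condition: transmit the state and update the
    # NN weights only when the gap since the last event grows large relative
    # to the current state norm.
    if k == 0 or np.linalg.norm(x - x_event) > sigma * np.linalg.norm(x):
        x_event = x.copy()
        events += 1

        u = Wa @ phi(x_event)
        stage_cost = x_event @ Q @ x_event + u @ R @ u
        x_next = f(x_event, u, 0.01 * rng.normal(size=n))

        # Critic: temporal-difference error and aperiodic weight update.
        td = stage_cost + Wc @ phi(x_next) - Wc @ phi(x_event)
        Wc -= alpha_c * td * phi(x_event)

        # Actor: nudge the control toward a one-step minimizer of the
        # estimated cost-to-go (a numerical surrogate for the
        # Hamiltonian-minimizing target used in ADP).
        def J(v):
            return (x_event @ Q @ x_event + v @ R @ v
                    + Wc @ phi(f(x_event, v, np.zeros(n))))
        grad_u = np.zeros(m)
        for i in range(m):
            du = np.zeros(m); du[i] = eps
            grad_u[i] = (J(u + du) - J(u - du)) / (2.0 * eps)
        u_target = u - 0.5 * grad_u
        Wa -= alpha_a * np.outer(u - u_target, phi(x_event))

    # Between events the plant evolves under the zero-order-held control.
    x = f(x, Wa @ phi(x_event), 0.01 * rng.normal(size=n))

print(f"events: {events}/200 steps, final ||x||: {np.linalg.norm(x):.3f}")
```

The structural point is that both the learning updates and the state transmissions occur only when the trigger fires, which is where the claimed savings in computation and network traffic come from.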
Pages: 425-438
Number of pages: 14