Stochastic Optimal Control of Unknown Linear Networked Control System using Q-Learning Methodology

Cited by: 0
Authors
Xu, Hao [1 ]
Jagannathan, S. [1 ]
Affiliation
[1] Missouri Univ Sci & Technol, Dept Elect & Comp Engn, Rolla, MO 65409 USA
Source
2011 AMERICAN CONTROL CONFERENCE | 2011
Keywords
Networked Control System (NCS); Q-function; Adaptive Estimator (AE); Optimal Control;
DOI
Not available
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
In this paper, the Bellman equation is utilized forward-in-time for the stochastic optimal control of a Networked Control System (NCS) with unknown system dynamics in the presence of unknown random delays and packet losses. The proposed stochastic optimal control approach, commonly referred to as adaptive dynamic programming, uses an adaptive estimator (AE) and ideas from Q-learning to solve the infinite-horizon optimal regulation problem for the NCS with unknown system dynamics. Update laws for tuning the unknown parameters of the AE online to obtain the time-based Q-function are derived. Lyapunov theory is used to show that all signals are asymptotically stable (AS) and that the approximated control signals converge to the optimal control inputs. Simulation results are included to show the effectiveness of the proposed scheme.
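The abstract describes solving an LQR-type problem via Q-learning without a system model. A minimal sketch of that general idea (Bradtke-style Q-learning policy iteration for deterministic LQR, not the paper's adaptive-estimator scheme, which additionally handles random delays and packet losses) is given below; the plant matrices A, B and costs are hypothetical, chosen only for illustration:

```python
import numpy as np

# Hypothetical stable 2-state, 1-input discrete-time plant (illustrative only).
A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([[0.0], [0.1]])
Qc = np.eye(2)           # state cost weight
Rc = 0.1 * np.eye(1)     # input cost weight

n, m = 2, 1
p = n + m                # dimension of the joint vector z = [x; u]

def phi(z):
    """Quadratic basis: theta . phi(z) == z' H z for symmetric H."""
    out = []
    for i in range(p):
        for j in range(i, p):
            out.append(z[i] * z[j] * (1.0 if i == j else 2.0))
    return np.array(out)

rng = np.random.default_rng(0)
K = np.zeros((m, n))     # initial stabilizing policy gain, u = -K x

for _ in range(10):                                  # policy iteration
    Phi, y = [], []
    x = rng.standard_normal(n)
    for _ in range(200):                             # collect transitions
        u = -K @ x + 0.5 * rng.standard_normal(m)    # probing noise
        x_next = A @ x + B @ u
        z = np.concatenate([x, u])
        z_next = np.concatenate([x_next, -K @ x_next])
        cost = x @ Qc @ x + u @ Rc @ u
        Phi.append(phi(z) - phi(z_next))             # Bellman/TD regressor
        y.append(cost)
        x = x_next
    # Least-squares fit of the Q-function parameters (policy evaluation)
    theta, *_ = np.linalg.lstsq(np.array(Phi), np.array(y), rcond=None)
    # Rebuild the symmetric kernel H from its upper-triangular entries
    H = np.zeros((p, p))
    idx = 0
    for i in range(p):
        for j in range(i, p):
            H[i, j] = H[j, i] = theta[idx]
            idx += 1
    # Greedy policy improvement: u = -H_uu^{-1} H_ux x
    K = np.linalg.solve(H[n:, n:], H[n:, :n])

print(np.round(K, 3))    # learned feedback gain, no model of A, B used
```

The key point, mirrored in the abstract, is that the Bellman equation is enforced forward in time on measured data: only observed states, inputs, and stage costs enter the regression, so neither A nor B needs to be known.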
Pages: 2819-2824
Page count: 6
Related Papers
50 records in total
  • [21] Optimal Control for A Class of Linear Stochastic Impulsive Systems with Partially Unknown Information
    Wu, Yan
    Luo, Shixian
    [J]. 2023 35TH CHINESE CONTROL AND DECISION CONFERENCE, CCDC, 2023, : 1768 - 1773
  • [22] Discounted linear Q-learning control with novel tracking cost and its stability
    Wang, Ding
    Ren, Jin
    Ha, Mingming
    [J]. INFORMATION SCIENCES, 2023, 626 : 339 - 353
  • [23] Optimal Tracking Current Control of Switched Reluctance Motor Drives Using Reinforcement Q-Learning Scheduling
    Alharkan, Hamad
    Saadatmand, Sepehr
    Ferdowsi, Mehdi
    Shamsi, Pourya
    [J]. IEEE ACCESS, 2021, 9 : 9926 - 9936
  • [24] On the effect of probing noise in optimal control LQR via Q-learning using adaptive filtering algorithms
    Lopez Yanez, Williams Jesus
    de Souza, Francisco das Chagas
    [J]. EUROPEAN JOURNAL OF CONTROL, 2022, 65
  • [25] Using Q-learning and genetic algorithms to improve the efficiency of weight adjustments for optimal control and design problems
    Kamali, Kaivan
    Jiang, L. J.
    Yen, John
    Wang, K. W.
    [J]. JOURNAL OF COMPUTING AND INFORMATION SCIENCE IN ENGINEERING, 2007, 7 (04) : 302 - 308
  • [26] Optimal Control for Interconnected Multi-Area Power Systems With Unknown Dynamics: An Off-Policy Q-Learning Method
    Wang, Jing
    Mi, Xuanrui
    Shen, Hao
    Park, Ju H.
    Shi, Kaibo
    [J]. IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS II-EXPRESS BRIEFS, 2024, 71 (05) : 2849 - 2853
  • [27] Q-learning for continuous-time linear systems: A model-free infinite horizon optimal control approach
    Vamvoudakis, Kyriakos G.
    [J]. SYSTEMS & CONTROL LETTERS, 2017, 100 : 14 - 20
  • [28] An Optimal Hybrid Learning Approach for Attack Detection in Linear Networked Control Systems
    Haifeng Niu
    Avimanyu Sahoo
    Chandreyee Bhowmick
    S. Jagannathan
    [J]. IEEE/CAA Journal of Automatica Sinica, 2019, 6 (06) : 1404 - 1416
  • [29] An Optimal Hybrid Learning Approach for Attack Detection in Linear Networked Control Systems
    Niu, Haifeng
    Sahoo, Avimanyu
    Bhowmick, Chandreyee
    Jagannathan, S.
    [J]. IEEE-CAA JOURNAL OF AUTOMATICA SINICA, 2019, 6 (06) : 1404 - 1416
  • [30] Adaptive Optimal Control via Q-Learning for Ito Fuzzy Stochastic Nonlinear Continuous-Time Systems With Stackelberg Game
    Ming, Zhongyang
    Zhang, Huaguang
    Yan, Ying
    Yang, Liu
    [J]. IEEE TRANSACTIONS ON FUZZY SYSTEMS, 2024, 32 (04) : 2029 - 2038