Cooperative Model-Based Reinforcement Learning for Approximate Optimal Tracking

Cited by: 0
Authors
Greene, Max L. [1 ]
Bell, Zachary, I [2 ]
Nivison, Scott A. [2 ]
How, Jonathan P. [3 ]
Dixon, Warren E. [1 ]
Affiliations
[1] Univ Florida, Dept Mech & Aerosp Engn, Gainesville, FL 32611 USA
[2] Air Force Res Lab, Munit Directorate, Eglin AFB, FL USA
[3] MIT, Dept Aeronaut & Astronaut, Cambridge, MA 02139 USA
Source
2021 AMERICAN CONTROL CONFERENCE (ACC) | 2021
Keywords
SYSTEMS;
DOI
Not available
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
This paper provides an approximate online adaptive solution to the infinite-horizon optimal tracking problem for a set of agents with homogeneous dynamics and a common tracking objective. Model-based reinforcement learning is implemented by simultaneously evaluating the Bellman error (BE) at the state of each agent and at nearby off-trajectory points, as needed, throughout the state space. Each agent calculates and shares its respective on- and off-trajectory BE information with a centralized estimator, which computes updates to the approximate solution of the infinite-horizon optimal tracking problem and shares the estimate with the agents. Through this edge-computing architecture, the computational burden associated with BE extrapolation is distributed between the agents and the centralized updating resource. Uniformly ultimately bounded tracking of each agent's state to the desired state and convergence of the control policy to a neighborhood of the optimal policy are proven via a Lyapunov-like stability analysis.
Pages: 1973-1978 (6 pages)
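The scheme described in the abstract, agents evaluating the Bellman error at their own states and at nearby off-trajectory points, then sending that information to a centralized estimator that updates a shared value-function approximation, can be sketched roughly as below. This is a minimal illustrative assumption, not the paper's method: it uses a discrete-time TD-style Bellman error with a linear-in-features value approximation and a normalized gradient update, whereas the paper works in continuous time with different update laws. All names (`features`, `Agent`, `central_update`), the scalar dynamics, and the quadratic cost are hypothetical.

```python
import numpy as np

GAMMA = 0.99  # assumed discount factor for this discrete-time sketch

def features(x):
    # Hypothetical polynomial feature vector for a scalar state.
    return np.array([x, x**2, x**3])

def bellman_error(w, x, u, x_next, reward):
    # TD/Bellman error under the linear value approximation V(x) = w . phi(x).
    return reward + GAMMA * w @ features(x_next) - w @ features(x)

class Agent:
    """One agent: evaluates the BE at its state and at off-trajectory samples."""

    def __init__(self, x):
        self.x = x

    def collect(self, w, dynamics, policy, n_offtraj=3, radius=0.5):
        rng = np.random.default_rng(0)
        # On-trajectory point plus nearby off-trajectory sample points.
        pts = [self.x] + list(self.x + radius * (rng.random(n_offtraj) - 0.5))
        data = []
        for x in pts:
            u = policy(x)
            xn = dynamics(x, u)
            r = -(x**2 + u**2)  # quadratic stage cost, negated as a reward
            delta = bellman_error(w, x, u, xn, r)
            # Share the BE and its weight gradient with the central estimator.
            data.append((features(x) - GAMMA * features(xn), delta))
        return data

def central_update(w, batch, lr=0.05):
    # Centralized estimator: normalized gradient steps that drive the
    # pooled on- and off-trajectory Bellman errors toward zero.
    for g, delta in batch:
        w = w + lr * delta * g / (1.0 + g @ g)
    return w
```

Pooling off-trajectory samples from several agents is what gives the central least-squares problem enough excitation to identify the value-function weights without any single agent exploring the whole state space, which is the motivation for sharing the extrapolation workload in the first place.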