On Model-free Reinforcement Learning for Switched Linear Systems: A Subspace Clustering Approach

Cited by: 0
Authors
Li, Hao [1 ]
Chen, Hua [1 ]
Zhang, Wei [1 ]
Affiliations
[1] Ohio State Univ, Dept Elect & Comp Engn, Columbus, OH 43210 USA
Source
2018 56TH ANNUAL ALLERTON CONFERENCE ON COMMUNICATION, CONTROL, AND COMPUTING (ALLERTON) | 2018
Funding
National Science Foundation (NSF);
Keywords
DISCRETE-TIME; TRACKING CONTROL; ALGORITHM;
DOI
Not available
Chinese Library Classification
TP [Automation Technology; Computer Technology];
Subject Classification Code
0812;
Abstract
In this paper, we study optimal control of switched linear systems using reinforcement learning. Instead of directly applying existing model-free reinforcement learning algorithms, we propose a Q-learning-based algorithm designed specifically for discrete-time switched linear systems. Inspired by analytical results from the optimal control literature, the Q function in our algorithm is approximated by the point-wise minimum of a finite number of quadratic functions. An associated update scheme for this approximation, based on subspace clustering, is also developed; it preserves the desired structure throughout the training process. Numerical examples for both low-dimensional and high-dimensional switched linear systems are provided to demonstrate the performance of our algorithm.
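The abstract describes the Q function as the point-wise minimum of finitely many quadratic forms, with the greedy switching decision taken over that approximation. The sketch below illustrates only how such a representation might be evaluated and used to pick a mode; the class name, the number of quadratic pieces, and the random matrices are illustrative assumptions, not the authors' implementation, and the paper's subspace-clustering update step is not shown.

```python
import numpy as np

# Illustrative sketch (not the authors' code): for a switched linear system
# with modes v = 0, ..., M-1, approximate Q(x, v) by the point-wise minimum
# of a finite set of quadratic forms x^T P_i^{(v)} x.
class MinQuadraticQ:
    def __init__(self, P_per_mode):
        # P_per_mode[v] is a list of symmetric positive semidefinite matrices
        self.P_per_mode = P_per_mode

    def value(self, x, v):
        # Q(x, v) ~= min_i x^T P_i^{(v)} x
        return min(float(x @ P @ x) for P in self.P_per_mode[v])

    def greedy_mode(self, x):
        # Greedy switching decision: the mode with the smallest Q value.
        return min(range(len(self.P_per_mode)), key=lambda v: self.value(x, v))


# Toy usage with two modes and two quadratic pieces per mode (assumed values).
rng = np.random.default_rng(0)

def random_psd(n):
    M = rng.standard_normal((n, n))
    return M @ M.T  # symmetric positive semidefinite

n_states, n_modes, n_pieces = 2, 2, 2
Q = MinQuadraticQ([[random_psd(n_states) for _ in range(n_pieces)]
                   for _ in range(n_modes)])
x = rng.standard_normal(n_states)
print(Q.value(x, 0), Q.greedy_mode(x))
```

In the paper's update scheme, as described in the abstract, subspace clustering is what assigns sampled data to the appropriate quadratic piece during training so that the min-of-quadratics structure is preserved; the sketch above covers only evaluation of a fixed approximation.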
Pages: 123-130
Number of pages: 8