On Model-free Reinforcement Learning for Switched Linear Systems: A Subspace Clustering Approach

Cited by: 0
Authors
Li, Hao [1 ]
Chen, Hua [1 ]
Zhang, Wei [1 ]
Institution
[1] Ohio State Univ, Dept Elect & Comp Engn, Columbus, OH 43210 USA
Source
2018 56TH ANNUAL ALLERTON CONFERENCE ON COMMUNICATION, CONTROL, AND COMPUTING (ALLERTON), 2018
Funding
National Science Foundation (USA)
Keywords
DISCRETE-TIME; TRACKING CONTROL; ALGORITHM;
DOI
Not available
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
In this paper, we study optimal control of switched linear systems using reinforcement learning. Instead of directly applying existing model-free reinforcement learning algorithms, we propose a Q-learning-based algorithm designed specifically for discrete-time switched linear systems. Inspired by analytical results from the optimal control literature, the Q function in our algorithm is approximated by the pointwise minimum of a finite number of quadratic functions. An associated update scheme for this approximation, based on subspace clustering, is also developed and preserves the desired structure during training. Numerical examples for both low-dimensional and high-dimensional switched linear systems are provided to demonstrate the performance of our algorithm.
Pages: 123-130 (8 pages)
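The abstract's central approximation, a Q function represented as the pointwise minimum of finitely many quadratics, can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the matrices `P_list` and the function `q_value` are assumptions chosen only to show the structure, with each quadratic piece corresponding to one mode of the switched system.

```python
import numpy as np

def q_value(x, u, P_list):
    """Pointwise-minimum-of-quadratics approximation: Q(x, u) ~ min_i z^T P_i z,
    where z = [x; u] stacks state and input (illustrative sketch)."""
    z = np.concatenate([x, u])
    return min(z @ P @ z for P in P_list)

# Two illustrative positive-definite "pieces" for a 1-state, 1-input system.
P_list = [np.array([[2.0, 0.5], [0.5, 1.0]]),
          np.array([[1.0, 0.0], [0.0, 3.0]])]

x = np.array([1.0])
u = np.array([0.5])
print(q_value(x, u, P_list))  # prints 1.75: the second quadratic attains the minimum
```

The minimum over pieces is what makes the approximation non-quadratic overall, matching the known structure of the switched-LQR value function; the paper's subspace-clustering update (not sketched here) decides which piece each training sample should refine.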