Policy Optimization for H2 Linear Control with H∞ Robustness Guarantee: Implicit Regularization and Global Convergence

Cited by: 0
Authors
Zhang, Kaiqing [1 ]
Hu, Bin
Basar, Tamer
Institutions
[1] Univ Illinois, Dept ECE, Champaign, IL 61820 USA
Source
LEARNING FOR DYNAMICS AND CONTROL, VOL 120, 2020
Keywords
Reinforcement learning; H-infinity robust control; policy optimization; implicit regularization; global convergence; MIXED H-2/H-INFINITY CONTROL;
DOI: none available
CLC Classification Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Policy optimization (PO) is a key ingredient for modern reinforcement learning (RL). For control design, certain constraints are usually enforced on the policies to be optimized, accounting for stability, robustness, or safety concerns on the system. Hence, PO is by nature a constrained (non-convex) optimization in most cases, whose global convergence is challenging to analyze in general. More importantly, some constraints that are safety-critical, e.g., the closed-loop stability, or the H-infinity-norm constraint that guarantees the system robustness, can be difficult to enforce on the controller being learned as the PO methods proceed. In this paper, we study the convergence theory of PO for H-2 linear control with H-infinity robustness guarantee. This general framework includes risk-sensitive linear control as a special case. One significant new feature of this problem, in contrast to the standard H-2 linear control, namely, linear quadratic regulator (LQR) problems, is the lack of coercivity of the cost function. This makes it challenging to guarantee the feasibility, namely, the H-infinity robustness, of the iterates. Interestingly, we propose two PO algorithms that enjoy the implicit regularization property, i.e., the iterates preserve the H-infinity robustness, as if they are regularized by the algorithms. Furthermore, convergence to the globally optimal policies with globally sublinear and locally (super-)linear rates is provided under certain conditions, despite the nonconvexity of the problem. To the best of our knowledge, our work offers the first results on the implicit regularization property and global convergence of PO methods for robust/risk-sensitive control.
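The setting the abstract describes — policy gradient on an H-2 (LQR) cost while keeping every iterate inside an H-infinity-norm constraint — can be illustrated with a deliberately minimal sketch. This is NOT the paper's algorithm (the paper's methods achieve feasibility implicitly, without an explicit check); here, for a hypothetical scalar system x+ = a*x + b*u + w with made-up parameters, feasibility is maintained by simply halving the step size whenever a candidate iterate would violate the H-infinity bound gamma:

```python
# Illustrative toy only: gradient descent on a scalar discrete-time LQR cost,
# rejecting any iterate whose closed-loop H-infinity norm exceeds gamma.
# All parameters (a, b, q, r, gamma, k0) are hypothetical, not from the paper.
a, b, q, r = 0.9, 1.0, 1.0, 1.0   # system x+ = a*x + b*u + w, cost q*x^2 + r*u^2
gamma = 4.0                        # H-infinity robustness bound on w -> x

def cost(k):
    """Steady-state H2 (LQR) cost under u = -k*x, unit noise variance."""
    a_cl = a - b * k               # closed-loop pole
    return (q + r * k**2) / (1.0 - a_cl**2)

def grad(k):
    """Analytic derivative of cost(k) (quotient rule)."""
    a_cl = a - b * k
    v = 1.0 - a_cl**2
    return (2.0 * r * k * v - (q + r * k**2) * 2.0 * a_cl * b) / v**2

def hinf(k):
    """H-infinity norm of w -> x for x+ = a_cl*x + w: 1/(1 - |a_cl|)."""
    a_cl = a - b * k
    return float("inf") if abs(a_cl) >= 1.0 else 1.0 / (1.0 - abs(a_cl))

k = 0.5                            # feasible start: hinf(0.5) = 1/0.6 < gamma
for _ in range(500):
    step = 0.1
    k_new = k - step * grad(k)
    while hinf(k_new) > gamma:     # backtrack until the iterate stays feasible
        step /= 2.0
        k_new = k - step * grad(k)
    k = k_new
```

The resulting gain approaches the unconstrained LQR optimum (here the constraint is inactive at the optimum), and every iterate along the way satisfies the H-infinity bound. The abstract's point is that such explicit safeguarding is unnecessary for the proposed algorithms: their iterates remain H-infinity-feasible "as if" regularized.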
Pages: 179-190 (12 pages)