Implicit Regularization and Momentum Algorithms in Nonlinearly Parameterized Adaptive Control and Prediction

Cited by: 28
Authors
Boffi, Nicholas M. [1 ]
Slotine, Jean-Jacques E. [2 ]
Affiliations
[1] Harvard Univ, John A Paulson Sch Engn & Appl Sci, Cambridge, MA 02138 USA
[2] MIT, Nonlinear Syst Lab, 77 Massachusetts Ave, Cambridge, MA 02139 USA
Keywords
ADAPTATION; SYSTEMS; INVARIANCE; IMMERSION
DOI
10.1162/neco_a_01360
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Stable concurrent learning and control of dynamical systems is the subject of adaptive control. Despite being an established field with many practical applications and a rich theory, much of the development in adaptive control for nonlinear systems revolves around a few key algorithms. By exploiting strong connections between classical adaptive nonlinear control techniques and recent progress in optimization and machine learning, we show that there exists considerable untapped potential in algorithm development for both adaptive nonlinear control and adaptive dynamics prediction. We begin by introducing first-order adaptation laws inspired by natural gradient descent and mirror descent. We prove that when there are multiple dynamics consistent with the data, these non-Euclidean adaptation laws implicitly regularize the learned model. The local geometry imposed during learning may thus be used to select, out of the many parameter vectors that will achieve perfect tracking or prediction, those with desired properties such as sparsity. We apply this result to regularized dynamics predictor and observer design, and as concrete examples, we consider Hamiltonian systems, Lagrangian systems, and recurrent neural networks. We subsequently develop a variational formalism based on the Bregman Lagrangian. We show that its Euler-Lagrange equations lead to natural gradient and mirror descent-like adaptation laws with momentum, and we recover their first-order analogues in the infinite friction limit. We illustrate our analyses with simulations demonstrating our theoretical results.
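As a rough sketch of the kind of adaptation law the abstract describes (the symbols $\psi$, $Y$, $s$, $\eta$, and $\ell$ below are illustrative assumptions, not taken from this record): for a model that is linear in its parameters, $f(x) \approx Y(x)\hat{a}$, a mirror descent-like law driven by a tracking or prediction error signal $s$ and a gain $\eta > 0$ takes the form

$$ \frac{d}{dt}\,\nabla\psi\big(\hat{a}(t)\big) = -\eta\, Y(x)^{\mathsf{T}} s, $$

where $\psi$ is a strongly convex potential; the Euclidean choice $\psi(\hat{a}) = \tfrac{1}{2}\lVert \hat{a} \rVert_2^2$ recovers the classical law $\dot{\hat{a}} = -\eta\, Y^{\mathsf{T}} s$. The implicit regularization result then says, informally, that among all parameter vectors consistent with the data, such a law drives $\hat{a}$ toward the one closest to its initialization in the Bregman divergence $D_\psi\big(\cdot,\, \hat{a}(0)\big)$. The momentum variants arise from a Bregman Lagrangian of the form introduced by Wibisono, Wilson, and Jordan,

$$ \mathcal{L}\big(\hat{a}, \dot{\hat{a}}, t\big) = e^{\alpha_t + \gamma_t}\Big( D_\psi\big(\hat{a} + e^{-\alpha_t}\dot{\hat{a}},\, \hat{a}\big) - e^{\beta_t}\, \ell(\hat{a}, t) \Big), $$

with time-dependent scalings $\alpha_t, \beta_t, \gamma_t$ and a placeholder loss $\ell$; its Euler-Lagrange equations yield second-order (momentum) analogues of the first-order law above, which is recovered in the infinite friction limit.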
Pages: 590-673
Page count: 84