Accelerated Gradient Approach For Deep Neural Network-Based Adaptive Control of Unknown Nonlinear Systems

Cited by: 4
Authors
Le, Duc M. [1 ]
Patil, Omkar Sudhir [2 ]
Nino, Cristian F. [2 ]
Dixon, Warren E. [2 ]
Affiliations
[1] Aurora Flight Sci, Boeing Co, Cambridge, MA 02142 USA
[2] Univ Florida, Dept Mech & Aerosp Engn, Gainesville, FL 32611 USA
Keywords
Artificial neural networks; Adaptation models; Adaptive control; Uncertainty; Training; Real-time systems; Convergence; deep neural networks; Lyapunov methods; nonlinear systems; uncertain systems; REAL-TIME; TRACKING; OPTIMIZATION; FEEDFORWARD;
DOI
10.1109/TNNLS.2024.3395064
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
Recent connections in the adaptive control literature to continuous-time analogs of Nesterov's accelerated gradient method have led to new real-time adaptation laws based on accelerated gradient methods. However, previous results assume that the system's uncertainties are linear-in-the-parameters (LIP). To compensate for non-LIP uncertainties, our preliminary results developed a neural network (NN)-based accelerated gradient adaptive controller to achieve trajectory tracking for nonlinear systems; however, the development and analysis considered only single-hidden-layer NNs. In this article, a generalized deep NN (DNN) architecture with an arbitrary number of hidden layers is considered, and a new DNN-based accelerated gradient adaptation scheme is developed to generate estimates of all the DNN weights in real time. A nonsmooth Lyapunov-based analysis guarantees that the developed accelerated gradient-based DNN adaptation design achieves global asymptotic tracking error convergence for general control-affine nonlinear systems subject to unknown (non-LIP) drift dynamics and exogenous disturbances. A comprehensive set of simulation studies is conducted on a two-state nonlinear system, a robotic manipulator, and a complex 20-D nonlinear system to demonstrate the improved performance of the developed method. The simulation studies demonstrate enhanced tracking and function-approximation performance resulting from both the DNN architecture and the accelerated gradient adaptation.
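The adaptation laws above build on continuous-time analogs of Nesterov's accelerated gradient method. As background, a minimal sketch of the classical discrete-time scheme is given below; the quadratic objective, step size, and momentum coefficient are illustrative assumptions, not values from the paper.

```python
import numpy as np

def nesterov_agd(grad, x0, step=0.1, momentum=0.9, iters=200):
    """Minimize a smooth function via Nesterov's accelerated gradient method.

    Illustrative sketch: the look-ahead point y incorporates momentum from
    the previous iterate, and the gradient is evaluated at y rather than x.
    """
    x_prev = np.asarray(x0, dtype=float)
    x = x_prev.copy()
    for _ in range(iters):
        y = x + momentum * (x - x_prev)        # look-ahead (momentum) point
        x_prev, x = x, y - step * grad(y)      # gradient step at the look-ahead
    return x

# Example: minimize f(x) = 0.5 * x^T A x with A positive definite,
# whose unique minimizer is the origin.
A = np.diag([1.0, 10.0])
x_star = nesterov_agd(lambda x: A @ x, x0=[5.0, 5.0])
```

The gradient evaluation at the extrapolated point `y` (rather than at `x`, as in plain momentum) is what distinguishes Nesterov's scheme; the paper's adaptation laws arise from continuous-time limits of such iterations.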
Pages: 6299-6313
Page count: 15