Adaptive neural network control of nonlinear systems with unknown dynamics

Cited by: 39
Authors
Cheng, Lin [1 ]
Wang, Zhenbo [2 ]
Jiang, Fanghua [3 ]
Li, Junfeng [3 ]
Affiliations
[1] Beihang Univ, Sch Astronaut, Beijing, Peoples R China
[2] Univ Tennessee, Dept Mech Aerosp & Biomed Engn, Knoxville, TN 37996 USA
[3] Tsinghua Univ, Sch Aerosp Engn, Beijing, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Unknown dynamics; Input-output linearization; Extended state observation; Iterative control learning; Adaptive network control; TIME; DESIGN;
DOI
10.1016/j.asr.2020.10.052
Chinese Library Classification
V [Aeronautics, Astronautics];
Discipline Code
08 ; 0825 ;
Abstract
In this study, an adaptive neural network control approach is proposed to achieve accurate and robust control of nonlinear systems with unknown dynamics, wherein a neural network is used to learn the inverse problem of the system dynamics with guaranteed convergence. This study makes three contributions. First, the considered system is transformed into a multi-integrator system via input-output linearization, and an extended state observer is used to estimate the transformed states. Second, an iterative control learning algorithm is proposed to train the neural network, and a stability analysis proves that the network's predictions converge to the ideal control inputs. Third, an adaptive neural network controller is developed by combining the trained network with a proportional-integral controller, addressing the long-standing difficulty that model-based methods face in determining control inputs for systems with unknown dynamics. Simulation results for a virtual control mission and an aerospace altitude tracking mission substantiate the effectiveness of the proposed techniques and illustrate the adaptability and robustness of the proposed controller. (C) 2020 COSPAR. Published by Elsevier Ltd. All rights reserved.
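The extended state observation step mentioned in the abstract can be illustrated with a minimal sketch. The observer below is a standard linear extended state observer (ESO) for a second-order plant, not the paper's specific design: the plant dynamics, observer bandwidth, and test input are all hypothetical choices made for illustration. The unmodeled term f(x1, x2) is augmented as an extra observer state z3, so the ESO recovers both the transformed states and the total unknown dynamics from the measured output alone.

```python
import numpy as np

# Illustrative sketch only: a linear ESO for a second-order plant
# x'' = f(x1, x2) + b*u, where f is treated as unknown by the observer
# and estimated as an extended state z3. The plant, gains, and input
# below are hypothetical, not taken from the paper.

def simulate_eso(T=5.0, dt=1e-3, b=1.0, omega_o=20.0):
    # Bandwidth parameterization: all observer poles placed at -omega_o
    l1, l2, l3 = 3 * omega_o, 3 * omega_o**2, omega_o**3
    x = np.array([0.0, 0.0])   # true plant states [x1, x2]
    z = np.zeros(3)            # observer states [z1, z2, z3]
    u = 0.5                    # constant test input
    for _ in range(int(T / dt)):
        f = -2.0 * x[0] - 0.5 * x[1]   # "unknown" dynamics (simulation only)
        # Forward-Euler integration of the true plant
        x = x + dt * np.array([x[1], f + b * u])
        # ESO update driven by the output estimation error e = x1 - z1
        e = x[0] - z[0]
        z = z + dt * np.array([z[1] + l1 * e,
                               z[2] + b * u + l2 * e,
                               l3 * e])
    return x, z

x, z = simulate_eso()
# z[0], z[1] track the states x1, x2; z[2] tracks the unmodeled term f
```

Once z3 tracks the unknown dynamics, the plant behaves like a chain of integrators from the compensated input, which is the multi-integrator form the abstract refers to; the neural network and PI controller then operate on that simplified system.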
Pages: 1114-1123 (10 pages)