Sampled-data-based adaptive optimal output-feedback control of a 2-degree-of-freedom helicopter

Cited by: 39
Authors
Gao, Weinan [1 ]
Huang, Mengzhe [1 ]
Jiang, Zhong-Ping [1 ]
Chai, Tianyou [2 ]
Affiliations
[1] NYU, Tandon Sch Engn, Dept Elect & Comp Engn, Brooklyn, NY 11201 USA
[2] Northeastern Univ, State Key Lab Synthet Automat Proc Ind, Shenyang 110819, Peoples R China
Funding
National Natural Science Foundation of China; US National Science Foundation;
Keywords
adaptive control; optimal control; feedback; helicopters; sampled data systems; aerospace control; dynamic programming; iterative methods; convergence; sampling methods; sampled-data-based adaptive optimal output-feedback control problem; Quanser 2-degree-of-freedom helicopter; output feedback; digital implementation; flight controller; sampled-data-based approximate-adaptive dynamic programming approach; policy iteration algorithm; near-optimal control gain; sampling period; DISCRETE-TIME-SYSTEMS; TRACKING; DESIGN;
DOI
10.1049/iet-cta.2015.0977
Chinese Library Classification (CLC)
TP [Automation Technology; Computer Technology];
Discipline Code
0812;
Abstract
This study addresses the adaptive and optimal control problem of a Quanser 2-degree-of-freedom helicopter via output feedback. To satisfy the requirement of digital implementation of the flight controller, this study distinguishes itself by proposing a novel sampled-data-based approximate/adaptive dynamic programming approach. A policy iteration algorithm is presented that learns a near-optimal control gain iteratively from input/output data. The convergence of the proposed algorithm is theoretically ensured, and the trade-off between optimality and the sampling period is rigorously studied as well. Finally, the authors show the performance of the proposed algorithm under bounded model uncertainties.
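The paper's method itself is data-driven and works from input/output measurements only. As a rough, model-based illustration of the underlying policy-iteration idea for a sampled-data LQR problem, the sketch below assumes a hypothetical two-state continuous-time plant and sampling period (not the Quanser 2-DOF helicopter model) and alternates policy evaluation (a discrete-time Lyapunov equation) with policy improvement until the gain converges.

```python
# Minimal sketch, NOT the authors' algorithm: model-based policy iteration
# (Hewer-type) for a sampled-data LQR problem. The plant, weights, and
# sampling period below are illustrative assumptions.
import numpy as np
from scipy.signal import cont2discrete
from scipy.linalg import solve_discrete_lyapunov

# Hypothetical open-loop-stable continuous-time plant (assumption).
Ac = np.array([[0.0, 1.0],
               [-2.0, -0.5]])
Bc = np.array([[0.0],
               [1.0]])
h = 0.05  # sampling period in seconds (assumption)

# Zero-order-hold discretisation to obtain the sampled-data model.
Ad, Bd, *_ = cont2discrete((Ac, Bc, np.eye(2), np.zeros((2, 1))), h, method='zoh')

Q = np.eye(2)          # state weighting
R = np.array([[1.0]])  # input weighting
K = np.zeros((1, 2))   # initial stabilising gain (valid since the plant is stable)

for _ in range(50):
    Acl = Ad - Bd @ K
    # Policy evaluation: P solves Acl' P Acl - P + Q + K' R K = 0.
    P = solve_discrete_lyapunov(Acl.T, Q + K.T @ R @ K)
    # Policy improvement: greedy gain with respect to P.
    K_new = np.linalg.solve(R + Bd.T @ P @ Bd, Bd.T @ P @ Ad)
    if np.linalg.norm(K_new - K) < 1e-10:
        break
    K = K_new

print("Near-optimal sampled-data gain K:\n", K)
```

In the paper, the quantities evaluated above from the model matrices are instead estimated from sampled input/output data, which is what removes the need for knowledge of the system dynamics.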
Pages: 1440-1447
Number of pages: 8