Reinforcement Q-learning based flight control for a passenger aircraft under actuator fault

Cited by: 0
Authors
Navid Mohammadi [1 ]
Moein Ebrahimi [1 ]
Morteza Tayefi [1 ]
Amirali Nikkhah [1 ]
Affiliation
[1] Institute of Intelligent Control Systems, K. N. Toosi University of Technology, Tehran
Source
Discover Mechanical Engineering, Vol. 4, Iss. 1 (2025)
Keywords
Airplane; Attitude control; Fault; Q-learning; Reinforcement learning
DOI
10.1007/s44245-025-00090-x
Abstract
This paper presents the design of a flight control system for a passenger airplane whose aerodynamic elevators are damaged. An optimal controller based on a Q-learning algorithm is designed to recover the desired behavior of the airplane. To this end, the linear dynamic model of the airplane in cruise flight is augmented with the actuator dynamics, introducing two tunable parameters, the control gain and the actuator time delay, into the airplane dynamics. The control coefficients are computed with the Q-learning algorithm for different modes so as to evaluate scenarios with both healthy and faulty actuators. Numerical results, compared against an LQR controller, highlight the potential of Q-learning as a practical approach to designing controllers for passenger airplanes under fault conditions. Moreover, the key advantage of the proposed algorithm over model-based controllers such as LQR is that the Q-learning control strategy is model-free and remains robust in path tracking under uncertainty and disturbances. © The Author(s) 2025.
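The abstract describes learning LQR-like feedback gains directly from data rather than from the plant model. As a rough illustration of that idea only (not the authors' implementation), the sketch below runs policy-iteration Q-learning on a hypothetical two-state discrete-time longitudinal model: the matrices A and B, the cost weights Qc and Rc, and the helper names phi and q_policy_iteration_step are all assumed placeholders. The plant matrices are used solely to simulate transitions, so the learner itself stays model-free.

```python
import numpy as np

# HYPOTHETICAL 2-state longitudinal model (e.g. pitch rate, pitch angle)
# with one elevator input. A and B only generate transition data below;
# the learner never reads them, so the scheme remains model-free.
A = np.array([[0.95, 0.10],
              [0.00, 0.90]])          # assumed discrete-time dynamics
B = np.array([[0.05],
              [0.10]])                # assumed elevator effectiveness
Qc, Rc = np.eye(2), np.eye(1)         # assumed quadratic cost weights
n, m = B.shape
p = n + m

def phi(z):
    """Quadratic features so that z^T H z = phi(z) . h, where h holds
    the upper-triangular entries of the symmetric Q-function kernel H."""
    outer = np.outer(z, z) * (2.0 - np.eye(p))   # count off-diagonals twice
    return outer[np.triu_indices(p)]

def q_policy_iteration_step(K, samples=2000, sigma=0.1, rng=None):
    """Fit the Q-function of the current gain K by least squares on
    observed (state, input, cost, next-state) data, then return the
    improved gain K' = Huu^{-1} Hux."""
    if rng is None:
        rng = np.random.default_rng(0)
    rows, targets = [], []
    x = rng.standard_normal(n)
    for _ in range(samples):
        u = -K @ x + sigma * rng.standard_normal(m)  # exploratory input
        x2 = A @ x + B @ u                           # plant acts as a data oracle
        z, z2 = np.r_[x, u], np.r_[x2, -K @ x2]      # greedy follow-up action
        rows.append(phi(z) - phi(z2))                # Bellman-residual regressor
        targets.append(x @ Qc @ x + u @ Rc @ u)      # observed one-step cost
        x = x2
    h, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
    Hu = np.zeros((p, p))
    Hu[np.triu_indices(p)] = h
    H = Hu + Hu.T - np.diag(np.diag(Hu))             # rebuild symmetric kernel
    return np.linalg.solve(H[n:, n:], H[n:, :n])     # improved feedback gain

K = np.zeros((m, n))          # open loop is stable here, so K = 0 is admissible
for i in range(8):            # a few policy-iteration sweeps
    K = q_policy_iteration_step(K, rng=np.random.default_rng(i))
print("learned feedback gain:", K)
```

Under these assumptions the learned gain should approach the LQR solution for the same weights, mirroring the paper's healthy-actuator comparison; an elevator fault could be emulated by scaling B or delaying u in the data-generation step before re-running the sweeps.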
Related Papers
50 items in total
  • [41] Ramp Metering Control Based on the Q-Learning Algorithm
    Ivanjko, Edouard
    Necoska, Daniela Koltovska
    Greguric, Martin
    Vujic, Miroslav
    Jurkovic, Goran
    Mandzuka, Sadko
    CYBERNETICS AND INFORMATION TECHNOLOGIES, 2015, 15 (05) : 88 - 97
  • [42] FARANE-Q: Fast Parallel and Pipeline Q-Learning Accelerator for Configurable Reinforcement Learning SoC
    Sutisna, Nana
    Ilmy, Andi M. Riyadhus
    Syafalni, Infall
    Mulyawan, Rahmat
    Adiono, Trio
    IEEE ACCESS, 2023, 11 : 144 - 161
  • [43] Output Feedback Reinforcement Q-Learning Control for the Discrete-Time Linear Quadratic Regulator Problem
    Rizvi, Syed Ali Asad
    Lin, Zongli
2017 IEEE 56TH ANNUAL CONFERENCE ON DECISION AND CONTROL (CDC), 2017
  • [44] Q-LEARNING, POLICY ITERATION AND ACTOR-CRITIC REINFORCEMENT LEARNING COMBINED WITH METAHEURISTIC ALGORITHMS IN SERVO SYSTEM CONTROL
    Zamfirache, Iuliu Alexandru
    Precup, Radu-Emil
    Petriu, Emil M.
    FACTA UNIVERSITATIS-SERIES MECHANICAL ENGINEERING, 2023, 21 (04) : 615 - 630
  • [45] Reinforcement learning tracking control of aircraft attitude
    Shen Chao
    Jing Yuan-wei
Proceedings of the 2007 Chinese Control and Decision Conference, 2007: 427+
  • [46] Reinforcement Q-Learning Control With Reward Shaping Function for Swing Phase Control in a Semi-active Prosthetic Knee
    Hutabarat, Yonatan
    Ekkachai, Kittipong
    Hayashibe, Mitsuhiro
    Kongprawechnon, Waree
    FRONTIERS IN NEUROROBOTICS, 2020, 14
  • [47] Simulating SQL injection vulnerability exploitation using Q-learning reinforcement learning agents
    Erdodi, Laszlo
    Sommervoll, Avald Aslaugson
    Zennaro, Fabio Massimo
    JOURNAL OF INFORMATION SECURITY AND APPLICATIONS, 2021, 61
  • [48] A Hand Gesture Recognition System Using EMG and Reinforcement Learning: A Q-Learning Approach
    Vasconez, Juan Pablo
    Barona Lopez, Lorena Isabel
    Valdivieso Caraguay, Angel Leonardo
    Cruz, Patricio J.
    Alvarez, Robin
    Benalcazar, Marco E.
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING - ICANN 2021, PT IV, 2021, 12894 : 580 - 591
  • [49] Discrete-Time Optimal Control Scheme Based on Q-Learning Algorithm
    Wei, Qinglai
    Liu, Derong
    Song, Ruizhuo
    2016 SEVENTH INTERNATIONAL CONFERENCE ON INTELLIGENT CONTROL AND INFORMATION PROCESSING (ICICIP), 2016, : 125 - 130
  • [50] Reinforcement Learning-Based Tracking Control for a Class of Discrete-Time Systems With Actuator Fault
    Liu, Yingying
    Wang, Zhanshan
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS II-EXPRESS BRIEFS, 2022, 69 (06) : 2827 - 2831