Model-Free Optimal Tracking Control of Nonlinear Input-Affine Discrete-Time Systems via an Iterative Deterministic Q-Learning Algorithm

Cited by: 46
Authors
Song, Shijie [1 ]
Zhu, Minglei [1 ]
Dai, Xiaolin [1 ]
Gong, Dawei [1 ]
Affiliations
[1] Univ Elect Sci & Technol China, Sch Mech & Elect Engn, Chengdu 611731, Peoples R China
Funding
Academy of Finland;
Keywords
Heuristic algorithms; Q-learning; Nonlinear dynamical systems; Approximation algorithms; Iterative algorithms; Convergence; Artificial neural networks; Adaptive dynamic programming (ADP); neural network (NN); off-policy technique; optimal tracking control (OTC); CONTROL SCHEME; LINEAR-SYSTEMS;
DOI
10.1109/TNNLS.2022.3178746
CLC Number
TP18 [Theory of Artificial Intelligence];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In this article, a novel model-free dynamic inversion-based Q-learning (DIQL) algorithm is proposed to solve the optimal tracking control (OTC) problem of unknown nonlinear input-affine discrete-time (DT) systems. Compared with the existing DIQL algorithm and the discount factor-based Q-learning (DFQL) algorithm, the proposed algorithm eliminates the tracking error while remaining model-free and off-policy. First, a new deterministic Q-learning iterative scheme is presented, and based on this scheme, a model-based off-policy DIQL algorithm is designed. The advantage of this new scheme is that it avoids training on atypical data and improves data utilization, thereby saving computing resources. The convergence and stability of the designed algorithm are analyzed, and it is proved that adding probing noise to the behavior policy does not affect convergence. Then, by introducing neural networks (NNs), a model-free version of the designed algorithm is further proposed so that the OTC problem can be solved without any knowledge of the system dynamics. Finally, three simulation examples are given to demonstrate the effectiveness of the proposed algorithm.
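To make the iterative scheme concrete, the following is a minimal tabular sketch of the deterministic Q-learning value iteration that underlies this family of algorithms, applied to an input-affine DT system x_{k+1} = f(x_k) + g(x_k)u_k with quadratic stage cost. The scalar system, cost, and grids below are hypothetical illustrations, not the paper's examples, and the sketch is model-based and on a regulation (not tracking) problem; the paper's actual algorithm is additionally off-policy, model-free, and NN-based.

```python
import numpy as np

# Hypothetical input-affine dynamics x' = f(x) + g(x)*u and stage cost.
f = lambda x: 0.9 * x          # drift term f(x) (assumed for illustration)
g = lambda x: 1.0              # input gain g(x) (assumed for illustration)
r = lambda x, u: x**2 + u**2   # quadratic stage cost

xs = np.linspace(-2.0, 2.0, 41)   # discretized state grid
us = np.linspace(-1.0, 1.0, 21)   # discretized input grid

Q = np.zeros((len(xs), len(us)))  # initial iterate Q^0 = 0

def nearest(grid, v):
    """Index of the grid point closest to v."""
    return int(np.argmin(np.abs(grid - v)))

# Deterministic Q-learning value iteration:
#   Q^{j+1}(x, u) = r(x, u) + min_{u'} Q^j(f(x) + g(x)u, u')
for _ in range(200):
    Q_new = np.empty_like(Q)
    for i, x in enumerate(xs):
        for j, u in enumerate(us):
            x_next = f(x) + g(x) * u
            Q_new[i, j] = r(x, u) + Q[nearest(xs, x_next)].min()
    if np.max(np.abs(Q_new - Q)) < 1e-8:
        Q = Q_new
        break
    Q = Q_new

# Greedy policy extracted from the converged Q-function.
policy = lambda x: us[int(np.argmin(Q[nearest(xs, x)]))]
```

With a zero initial iterate and a stabilizable system, the iterates Q^j are monotonically nondecreasing and converge to the optimal Q-function, which is the property the paper's convergence analysis establishes for its scheme; the extracted policy drives the state toward the origin.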
Pages: 999-1012
Number of pages: 14