Steady-State Error Compensation for Reinforcement Learning-Based Control of Power Electronic Systems

Times Cited: 4
Authors
Weber, Daniel [1 ]
Schenke, Maximilian [1 ]
Wallscheid, Oliver [1 ]
Affiliations
[1] Paderborn Univ, Dept Power Elect & Elect Drives, D-33098 Paderborn, Germany
Keywords
Control; disturbance rejection; power electronic systems; reference tracking; reinforcement learning; steady-state error; OPTIMAL TRACKING; CONVERTER; DESIGN
DOI
10.1109/ACCESS.2023.3297274
CLC Classification Number
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Data-driven approaches like reinforcement learning (RL) allow a model-free, self-adaptive controller design, enabling a fast and largely automatic controller development process with minimal human effort. While RL has already been shown in various power electronic applications to handle the transient control behavior of complex systems sufficiently well, the challenge of non-vanishing steady-state control errors remains, arising from control policy approximations and finite training times. This is a crucial problem in power electronic applications that require steady-state control accuracy, e.g., voltage control of grid-forming inverters or accurate current control in motor drives. To overcome this issue, an integral action state augmentation for RL controllers is introduced that mimics an integrating feedback and does not require any expert knowledge, leaving the approach model-free. The RL controller thereby learns to suppress steady-state control deviations more effectively. The benefit of the developed method for both reference tracking and disturbance rejection is validated on two voltage source inverter control tasks targeting islanded microgrid and traction drive applications. Compared to a standard RL setup, the suggested extension reduces the steady-state error by up to 52% within the considered validation scenarios.
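The abstract's core idea, augmenting the RL agent's observation with a discrete-time integral of the control error so the policy can learn integrating behavior, can be sketched as an environment wrapper. This is a minimal illustrative sketch, not the authors' implementation: the environment interface, the toy first-order plant, and all names here are assumptions.

```python
import numpy as np

class FirstOrderPlant:
    """Toy discretized first-order plant standing in for a power-electronic
    environment (hypothetical, for illustration only)."""

    def __init__(self, dt=0.01, reference=1.0):
        self.dt = dt
        self.reference = reference
        self.x = 0.0

    def reset(self):
        self.x = 0.0
        return np.array([self.x])

    def step(self, action):
        # simple stable first-order dynamics: x_{k+1} = x_k + dt * (u_k - x_k)
        self.x += self.dt * (action - self.x)
        reward = -abs(self.reference - self.x)
        info = {"reference": self.reference, "measurement": self.x}
        return np.array([self.x]), reward, False, info

class IntegralActionWrapper:
    """Appends the discrete-time integral of the tracking error to the
    observation, mimicking an integrating feedback path so the RL agent
    can learn to cancel steady-state control errors."""

    def __init__(self, env, dt=0.01):
        self.env = env
        self.dt = dt
        self.error_integral = 0.0

    def reset(self):
        self.error_integral = 0.0
        return np.append(self.env.reset(), self.error_integral)

    def step(self, action):
        obs, reward, done, info = self.env.step(action)
        # accumulate the tracking error e_k = r_k - y_k (forward Euler)
        self.error_integral += self.dt * (info["reference"] - info["measurement"])
        return np.append(obs, self.error_integral), reward, done, info
```

Because the integral term grows whenever a residual error persists, a policy trained on the augmented state is penalized for leaving steady-state deviations uncorrected, without requiring any plant model.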
Pages: 76524-76536
Number of pages: 13
Related Papers
50 records in total
  • [41] Reinforcement Learning-Based Power Control for Reliable Mission-Critical Wireless Transmission
    Guo, Chongtao
    Li, Zhengchao
    Liang, Le
    Li, Geoffrey Ye
    IEEE INTERNET OF THINGS JOURNAL, 2023, 10 (23) : 20868 - 20883
  • [42] Constraint based approach for the steady-state simulation of complex systems: Application to ship control
    Larroude, Vincent
    Chenouard, Raphael
    Yvars, Pierre-Alain
    Millet, Dominique
    ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2013, 26 (01) : 499 - 514
  • [43] Steady-State Gain Identification and Control of Multivariable Unstable Systems
    Ram, V. Dhanya
    Rajapandiyan, C.
    Chidambaram, M.
    CHEMICAL ENGINEERING COMMUNICATIONS, 2015, 202 (02) : 151 - 162
  • [44] Reinforcement Learning-Based Sensor Access Control for WBANs
    Chen, Guihong
    Zhan, Yiju
    Sheng, Geyi
    Xiao, Liang
    Wang, Yonghua
    IEEE ACCESS, 2019, 7 : 8483 - 8494
  • [45] An Improved Current Control Strategy Based on Particle Swarm Optimization and Steady-State Error Correction for SAPF
    Cao, Wu
    Liu, Kangli
    Wu, Mumu
    Xu, Sheng
    Zhao, Jianfeng
    IEEE TRANSACTIONS ON INDUSTRY APPLICATIONS, 2019, 55 (04) : 4268 - 4274
  • [46] Deep reinforcement learning-based attitude control for spacecraft using control moment gyros
    Oghim, Snyoll
    Park, Junwoo
    Bang, Hyochoong
    Leeghim, Henzeh
    ADVANCES IN SPACE RESEARCH, 2025, 75 (01) : 1129 - 1144
  • [47] A Reinforcement Learning-Based Control Approach for Unknown Nonlinear Systems with Persistent Adversarial Inputs
    Zhong, Xiangnan
    He, Haibo
    2021 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2021,
  • [48] A reinforcement learning-based scheme for direct adaptive optimal control of linear stochastic systems
    Wong, Wee Chin
    Lee, Jay H.
    OPTIMAL CONTROL APPLICATIONS & METHODS, 2010, 31 (04) : 365 - 374
  • [49] Enhanced Model Predictive Control Using State Variable Feedback for Steady-State Error Cancellation
    Andreu, Marcos
    Rohten, Jaime
    Espinoza, Jose
    Silva, Jose
    Pulido, Esteban
    Leon, Lesyani
    SENSORS, 2024, 24 (18)
  • [50] Reinforcement learning-based optimal control for Markov jump systems with completely unknown dynamics
    Shi, Xiongtao
    Li, Yanjie
    Du, Chenglong
    Chen, Chaoyang
    Zong, Guangdeng
    Gui, Weihua
    AUTOMATICA, 2025, 171