Steady-State Error Compensation for Reinforcement Learning-Based Control of Power Electronic Systems

Cited: 4
Authors
Weber, Daniel [1 ]
Schenke, Maximilian [1 ]
Wallscheid, Oliver [1 ]
Affiliation
[1] Paderborn Univ, Dept Power Elect & Elect Drives, D-33098 Paderborn, Germany
Keywords
Control; disturbance rejection; power electronic systems; reference tracking; reinforcement learning; steady-state error; OPTIMAL TRACKING; CONVERTER; DESIGN
DOI
10.1109/ACCESS.2023.3297274
CLC Number
TP [Automation Technology, Computer Technology]
Discipline Classification Code
0812
Abstract
Data-driven approaches like reinforcement learning (RL) enable a model-free, self-adaptive controller design and thus a fast, largely automated development process with minimal human effort. While prior work on various power electronic applications has shown that RL can sufficiently handle the transient control behavior of complex systems, the challenge of non-vanishing steady-state control errors remains; it arises from the use of approximate control policies and finite training times. This is a crucial problem in power electronic applications that require high steady-state control accuracy, e.g., voltage control of grid-forming inverters or accurate current control in motor drives. To overcome this issue, an integral action state augmentation for RL controllers is introduced that mimics an integrating feedback path and does not require any expert knowledge, keeping the approach model-free. The RL controller thereby learns to suppress steady-state control deviations more effectively. The benefit of the developed method is validated for both reference tracking and disturbance rejection on two voltage source inverter control tasks targeting islanded microgrid and traction drive applications. Compared to a standard RL setup, the suggested extension reduces the steady-state error by up to 52% within the considered validation scenarios.
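The abstract's core mechanism, augmenting the RL controller's observation with a discrete-time integral of the tracking error so that the learned policy can reproduce integrating-feedback behavior, can be illustrated with a short sketch. The following Python snippet is a minimal illustrative example only, not the authors' implementation; the class name, the leakage and anti-windup parameters, and the observation interface are all assumptions.

```python
import numpy as np

class IntegralErrorAugmentation:
    """Illustrative sketch: append the accumulated (discrete-time
    integrated) tracking error to an RL observation so the policy can
    learn an integrating-feedback-like action channel. All names and
    parameters here are assumptions, not the paper's code."""

    def __init__(self, num_tracked: int, dt: float,
                 decay: float = 1.0, limit: float = np.inf):
        self.dt = dt          # controller sampling time in seconds
        self.decay = decay    # optional leakage (< 1.0) against windup
        self.limit = limit    # optional anti-windup clamp on the integral
        self.integral = np.zeros(num_tracked)

    def reset(self) -> None:
        # Call at the start of each training/evaluation episode.
        self.integral[:] = 0.0

    def augment(self, observation, reference, measurement) -> np.ndarray:
        # Discrete-time integration of the tracking error e = r - y.
        error = (np.asarray(reference, dtype=float)
                 - np.asarray(measurement, dtype=float))
        self.integral = np.clip(self.decay * self.integral + self.dt * error,
                                -self.limit, self.limit)
        # The agent observes the plant state plus the integral state(s).
        return np.concatenate([np.asarray(observation, dtype=float),
                               self.integral])
```

In a hypothetical inverter voltage-control loop, reset() would be called at the start of each episode and augment() once per control step, e.g. obs_aug = aug.augment(obs, v_ref, v_meas), before feeding obs_aug to the policy network; since the augmentation only needs the reference and the measured output, the overall approach stays model-free.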
Pages: 76524-76536
Page Count: 13
Related Papers
(50 in total; items [21]-[30] shown)
  • [21] Automated synthesis of steady-state continuous processes using reinforcement learning
    Göttl, Quirin
    Grimm, Dominik G.
    Burger, Jakob
    FRONTIERS OF CHEMICAL SCIENCE AND ENGINEERING, 2022, 16 (02) : 288 - 302
  • [22] Model-free reinforcement learning-based transient power control of vehicle fuel cell systems
    Zhang, Yahui
    Li, Ganxin
    Tian, Yang
    Wang, Zhong
    Liu, Jinfa
    Gao, Jinwu
    Jiao, Xiaohong
    Wen, Guilin
    APPLIED ENERGY, 2025, 388
  • [23] Reinforcement Learning-Based Control for Nonlinear Discrete-Time Systems with Unknown Control Directions and Control Constraints
    Huang, Miao
    Liu, Cong
    He, Xiaoqi
    Ma, Longhua
    Lu, Zheming
    Su, Hongye
    NEUROCOMPUTING, 2020, 402 : 50 - 65
  • [24] Using the Concept of Type Number to Analyze the Steady-State Error of Nonunity Feedback Control Systems
    Mou, Shann-Chyi
    2011 INTERNATIONAL CONFERENCE ON FUZZY SYSTEMS AND NEURAL COMPUTING (FSNC 2011), VOL II, 2011, : 393 - 396
  • [25] Verifying and Simulating of the Novel Analytical Method of the Steady-State Error for Nonunity Feedback Control Systems
    Mou, Shann-Chyi
    ADVANCED MEASUREMENT AND TEST, PTS 1-3, 2011, 301-303 : 1670 - 1675
  • [26] Current Control of Grid-Connected Boost Inverter With Zero Steady-State Error
    Zhao, Wei
    Lu, Dylan Dah-Chuan
    Agelidis, Vassilios G.
    IEEE TRANSACTIONS ON POWER ELECTRONICS, 2011, 26 (10) : 2825 - 2834
  • [27] Null-space-based steady-state tracking error compensation of simple adaptive control with parallel feedforward compensator and its application to rotation control
    Sato, T.
    Fujita, K.
    Kawaguchi, N.
    Takagi, T.
    Mizumoto, I.
    CONTROL ENGINEERING PRACTICE, 2020, 105
  • [28] Bridging Transient and Steady-State Performance in Voltage Control: A Reinforcement Learning Approach With Safe Gradient Flow
    Feng, Jie
    Cui, Wenqi
    Cortes, Jorge
    Shi, Yuanyuan
    IEEE CONTROL SYSTEMS LETTERS, 2023, 7 : 2845 - 2850
  • [29] Reinforcement Learning-Based Decentralized Control for Networked Interconnected Systems With Communication and Control Constraints
    Liu, Jinliang
    Zhang, Nan
    Zha, Lijuan
    Xie, Xiangpeng
    Tian, Engang
    IEEE TRANSACTIONS ON AUTOMATION SCIENCE AND ENGINEERING, 2024, 21 (03) : 4674 - 4685
  • [30] Testing the Plasticity of Reinforcement Learning-based Systems
    Biagiola, Matteo
    Tonella, Paolo
    ACM TRANSACTIONS ON SOFTWARE ENGINEERING AND METHODOLOGY, 2022, 31 (04)