Steady-State Error Compensation for Reinforcement Learning-Based Control of Power Electronic Systems

Cited by: 4
Authors
Weber, Daniel [1 ]
Schenke, Maximilian [1 ]
Wallscheid, Oliver [1 ]
Institutions
[1] Paderborn Univ, Dept Power Elect & Elect Drives, D-33098 Paderborn, Germany
Keywords
Control; disturbance rejection; power electronic systems; reference tracking; reinforcement learning; steady-state error; OPTIMAL TRACKING; CONVERTER; DESIGN;
DOI
10.1109/ACCESS.2023.3297274
CLC Number
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812;
Abstract
Data-driven approaches like reinforcement learning (RL) allow a model-free, self-adaptive controller design that enables a fast and largely automatic controller development process with minimal human effort. While various power electronic applications have already shown that RL can sufficiently handle the transient control behavior of complex systems, the challenge of non-vanishing steady-state control errors remains; these arise from the use of control policy approximations and finite training times. This is a crucial problem in power electronic applications that require steady-state control accuracy, e.g., voltage control of grid-forming inverters or accurate current control in motor drives. To overcome this issue, an integral action state augmentation for RL controllers is introduced that mimics an integrating feedback and does not require any expert knowledge, leaving the approach model-free. The RL controller thereby learns to suppress steady-state control deviations more effectively. The benefit of the developed method, both for reference tracking and disturbance rejection, is validated on two voltage source inverter control tasks targeting islanded microgrid and traction drive applications. Compared to a standard RL setup, the suggested extension reduces the steady-state error by up to 52% within the considered validation scenarios.
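The core idea described in the abstract, extending the RL agent's observation with an integrated tracking error so the policy can learn an integrating feedback action, can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation; the class name, interface, and the clipping safeguard are assumptions.

```python
import numpy as np

class IntegralErrorAugmentation:
    """Sketch of integral action state augmentation for an RL controller:
    the observation is extended with an accumulated tracking error,
    mimicking the integrator of a classical feedback loop.
    Interface and parameter names are illustrative, not from the paper."""

    def __init__(self, dt, clip=10.0):
        self.dt = dt        # sampling time of the control loop
        self.clip = clip    # anti-windup-style clipping of the integral state (assumption)
        self.e_int = 0.0    # integrated tracking error

    def reset(self):
        # reset the integrator at the start of each episode
        self.e_int = 0.0

    def augment(self, obs, reference, measurement):
        # accumulate the control error (forward Euler integration)
        self.e_int += (reference - measurement) * self.dt
        self.e_int = float(np.clip(self.e_int, -self.clip, self.clip))
        # append the integral state to the original observation vector
        return np.append(obs, self.e_int)
```

In use, the augmented observation is fed to the agent instead of the raw one, so a persistent error builds up in the extra state and the learned policy can counteract it, much like the integral term of a PI controller.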
Pages: 76524 - 76536
Page count: 13
Related Papers
50 records
  • [1] Steady-State Error Compensation for Reinforcement Learning with Quadratic Rewards
    Wang, Liyao
    Zheng, Zishun
    Lin, Yuan
    2024 14TH ASIAN CONTROL CONFERENCE, ASCC 2024, 2024, : 1608 - 1613
  • [2] Sim-to-real transfer in reinforcement learning-based, non-steady-state control for chemical plants
    Kubosawa S.
    Onishi T.
    Tsuruoka Y.
    SICE Journal of Control, Measurement, and System Integration, 2022, 15 (01) : 10 - 23
  • [3] Type Number Based Steady-State Error Analysis on Fractional Order Control Systems
    Pan, Jinwen
    Gao, Qing
    Qiu, Jianbin
    Wang, Yong
    ASIAN JOURNAL OF CONTROL, 2017, 19 (01) : 266 - 278
  • [4] Development of Self-Tuning Control System with Fuzzy Compensation of Steady-State Error
    Denisova, Liudmila
    Meshcheryakov, Vitalii
    2018 INTERNATIONAL CONFERENCE ON INDUSTRIAL ENGINEERING, APPLICATIONS AND MANUFACTURING (ICIEAM), 2018,
  • [5] Automated synthesis of steady-state continuous processes using reinforcement learning
    Goettl, Quirin
    Grimm, Dominik G.
    Burger, Jakob
    FRONTIERS OF CHEMICAL SCIENCE AND ENGINEERING, 2022, 16 (02) : 288 - 302
  • [6] Tracking control with zero steady-state error for time-delay systems
    Gao, HongWei
    PROCEEDINGS OF FIRST INTERNATIONAL CONFERENCE OF MODELLING AND SIMULATION, VOL IV: MODELLING AND SIMULATION IN BUSINESS, MANAGEMENT, ECONOMIC AND FINANCE, 2008, : 22 - 27
  • [7] Control of magnetic levitation systems with reduced steady-state power losses
    de Queiroz, M. S.
    Pradhananga, S.
    IEEE TRANSACTIONS ON CONTROL SYSTEMS TECHNOLOGY, 2007, 15 (06) : 1096 - 1102
  • [8] Contrastive State Augmentations for Reinforcement Learning-Based Recommender Systems
    Ren, Zhaochun
    Huang, Na
    Wang, Yidan
    Ren, Pengjie
    Ma, Jun
    Lei, Jiahuan
    Shi, Xinlei
    Luo, Hengliang
    Jose, Joemon
    Xin, Xin
    PROCEEDINGS OF THE 46TH INTERNATIONAL ACM SIGIR CONFERENCE ON RESEARCH AND DEVELOPMENT IN INFORMATION RETRIEVAL, SIGIR 2023, 2023, : 922 - 931
  • [9] Reinforcement Learning-based Control System of a Hybrid Power Supply
    Daniel, Francisca
    Rix, Arnold
    2020 INTERNATIONAL SAUPEC/ROBMECH/PRASA CONFERENCE, 2020, : 462 - 467
  • [10] Reinforcement Learning-based Active Disturbance Rejection Control for Nonlinear Systems with Disturbance
    Kong, Xiangyu
    Xia, Yuanqing
    2023 2ND CONFERENCE ON FULLY ACTUATED SYSTEM THEORY AND APPLICATIONS, CFASTA, 2023, : 799 - 804