Reinforcement Learning Approach to Autonomous PID Tuning

Cited: 0
Authors
Dogru, Oguzhan [1 ]
Velswamy, Kirubakaran [1 ]
Ibrahim, Fadi [1 ]
Wu, Yuqi [2 ]
Sundaramoorthy, Arun Senthil [1 ]
Huang, Biao [1 ]
Xu, Shu [3 ]
Nixon, Mark [3 ]
Bell, Noel [3 ]
Affiliations
[1] Univ Alberta, Dept Chem & Mat Engn, Edmonton, AB T6G 1H9, Canada
[2] Univ Alberta, Dept Elect & Elect Engn, Edmonton, AB T6G 1H9, Canada
[3] Emerson Elect Co, Austin, TX 78681 USA
Funding
Natural Sciences and Engineering Research Council of Canada;
Keywords
DOI
None available
Chinese Library Classification
TP [Automation and Computer Technology];
Discipline Code
0812 ;
Abstract
Many industrial processes utilize proportional-integral-derivative (PID) controllers due to their practicality and often satisfactory performance. The proper controller parameters depend highly on the operational conditions and process uncertainties. This dependence necessitates frequent tuning in real-time control problems due to process drifts and operational condition changes. This study combines recent developments in computer science and control theory to address the tuning problem. It formulates PID tuning as a reinforcement learning task with constraints. The proposed scheme identifies an initial approximate step-response model and lets the agent learn the dynamics off-line from the model with minimal effort. After achieving satisfactory training performance on the model, the agent is fine-tuned on-line on the actual process to adapt to the real dynamics, thereby minimizing the training time on the real process and avoiding unnecessary wear, which can be beneficial for industrial applications. This sample-efficient method is applied to a pilot-scale multi-modal tank system. The performance of the method is demonstrated by setpoint tracking and disturbance regulation experiments.
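The abstract's two-stage scheme — identify an approximate step-response model, train the tuner off-line on it, then fine-tune on the plant — can be sketched in miniature. In the sketch below, a first-order-plus-dead-time (FOPDT) model stands in for the identified step-response model, and a simple perturb-and-keep search loop stands in for the RL agent; the paper's actual agent, reward, and constraints are not specified in this record, so all function names and parameter values here are illustrative assumptions.

```python
import numpy as np

def simulate_pid(kp, ki, kd, K=2.0, tau=5.0, theta=1.0,
                 dt=0.1, t_end=60.0, setpoint=1.0):
    """Closed-loop step test of a FOPDT process G(s) = K*exp(-theta*s)/(tau*s + 1)
    under a PID controller; returns the integral of squared error (ISE)."""
    delay_steps = int(round(theta / dt))
    u_buf = [0.0] * delay_steps            # transport-delay buffer for the input
    y, integ, prev_err, ise = 0.0, 0.0, setpoint, 0.0
    for _ in range(int(t_end / dt)):
        err = setpoint - y
        integ += err * dt
        deriv = (err - prev_err) / dt
        prev_err = err
        u = kp * err + ki * integ + kd * deriv
        u_buf.append(u)
        u_delayed = u_buf.pop(0)           # input that reaches the process now
        y += dt * (K * u_delayed - y) / tau  # explicit-Euler process update
        ise += err ** 2 * dt
    return ise

def offline_tune(n_iters=200, seed=0):
    """Stand-in for the off-line training stage: perturb the PID gains on the
    identified model and keep any candidate that lowers the ISE cost."""
    rng = np.random.default_rng(seed)
    gains = np.array([0.5, 0.05, 0.0])     # initial (kp, ki, kd)
    best = simulate_pid(*gains)
    for _ in range(n_iters):
        cand = np.clip(gains + rng.normal(0.0, 0.1, size=3), 0.0, None)
        cost = simulate_pid(*cand)
        if cost < best:
            gains, best = cand, cost
    return gains, best
```

The on-line fine-tuning stage would then continue the same loop against the real process with small, constrained perturbations, so that only a few plant experiments are needed after the off-line stage has done most of the work.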
Pages: 2691 - 2696
Page count: 6