Intelligent PID Controller Based on Deep Reinforcement Learning

Cited by: 1
Authors
Zhai, Yinhe [1]
Zhao, Qiang [2]
Han, Yinghua [3]
Wang, Jinkuan [1]
Zeng, Wenying [1]
Affiliations
[1] Northeastern Univ, Coll Informat Sci & Engn, Shenyang, Peoples R China
[2] Northeastern Univ Qinhuangdao, Sch Control Engn, Qinhuangdao, Hebei, Peoples R China
[3] Northeastern Univ Qinhuangdao, Sch Comp & Commun Engn, Qinhuangdao, Hebei, Peoples R China
Source
2024 8TH INTERNATIONAL CONFERENCE ON ROBOTICS, CONTROL AND AUTOMATION, ICRCA 2024 | 2024
Keywords
intelligent control; RL; PID; DDPG;
DOI
10.1109/ICRCA60878.2024.10649187
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
PID control remains the most important and widely used method in industrial control. It is easy to implement, improves both the steady-state and dynamic performance of a system, and is applicable to a broad range of plants. However, it suffers from difficult parameter tuning and limited control performance: the proportional, integral, and derivative gains are fixed and cannot adapt under disturbance, which degrades system stability, and PID control is prone to overshoot, which rules it out for certain systems. In recent years, reinforcement learning (RL) algorithms have advanced rapidly from discrete to continuous action spaces and have attracted strong interest from researchers in automatic control. RL control offers a higher degree of intelligence and better dynamic performance, but its steady-state performance is poor, and its highly sensitive control actions can damage the actuator. In this paper, an adaptive PID controller based on deep reinforcement learning is proposed. The desired control behavior is encoded in the reward function, and an agent is trained to supply the PID parameters in real time. Through this real-time parameter adjustment, the proposed controller improves response speed, suppresses overshoot, and provides a degree of disturbance rejection.
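The scheme described in the abstract lends itself to a compact sketch: at each control step a policy outputs the three PID gains, and the reward penalizes both tracking error and abrupt control moves. The following minimal Python sketch illustrates that loop under stated assumptions; the plant model, the 0.1 reward weight, and dummy_policy are placeholders and not the paper's implementation (the abstract does not specify the DDPG network or reward details).

class PIDController:
    """Discrete-time PID law: u = Kp*e + Ki*integral(e)dt + Kd*de/dt."""

    def __init__(self, dt):
        self.dt = dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error, kp, ki, kd):
        # Gains arrive from the RL agent at every control step, rather
        # than being fixed offline as in classical PID tuning.
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return kp * error + ki * self.integral + kd * derivative


def reward(error, delta_u):
    # Hypothetical reward shaping: penalize tracking error (response
    # speed, overshoot) and large control increments (actuator
    # protection); the 0.1 weight is an assumption, not from the paper.
    return -(abs(error) + 0.1 * abs(delta_u))


def dummy_policy(state):
    # Stand-in for the trained DDPG actor, which would map the state
    # to (kp, ki, kd); fixed placeholder gains are returned here.
    return 2.0, 0.5, 0.05


# Closed-loop demo on an assumed first-order plant x' = -x + u.
pid, x, setpoint, u_prev = PIDController(dt=0.01), 0.0, 1.0, 0.0
for _ in range(500):
    error = setpoint - x
    kp, ki, kd = dummy_policy((error, pid.integral, pid.prev_error))
    u = pid.step(error, kp, ki, kd)
    r = reward(error, u - u_prev)  # would drive DDPG training
    x += (-x + u) * pid.dt         # Euler step of the plant
    u_prev = u
print(f"final output x = {x:.3f} (setpoint {setpoint})")

In a full DDPG setup, the (state, gains, reward, next state) tuples collected by this loop would populate a replay buffer used to train the actor that replaces dummy_policy.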
Pages: 343-348
Number of pages: 6