Optimization control of the double-capacity water tank-level system using the deep deterministic policy gradient algorithm

Cited by: 6
Authors
Ye, Likun [1 ]
Jiang, Pei [2 ]
Affiliations
[1] South China Univ Technol, Shien Ming Wu Sch Intelligent Engn, Guangzhou 510640, Guangdong, Peoples R China
[2] Beihang Univ, Sch Instrument Sci & Optoelect Engn, Beijing, Peoples R China
Keywords
DDPG pure control; process control system; reinforcement learning; DDPG adaptive compensation control; REINFORCEMENT LEARNING-METHOD;
DOI
10.1002/eng2.12668
Chinese Library Classification
TP39 [Computer Applications];
Discipline Classification Code
081203; 0835;
Abstract
Process control systems are subject to external factors such as changes in working conditions and disturbance interference, which can significantly affect a system's stability and overall performance. The application and promotion of intelligent control algorithms with self-learning, self-optimizing, and self-adapting characteristics has therefore become a challenging yet meaningful research topic. In this article, we propose a novel approach that incorporates the deep deterministic policy gradient (DDPG) algorithm into the control of a double-capacity water tank-level system. Specifically, we introduce a fully connected layer on the observer side of the critic network to enhance its expressive capability and processing efficiency, allowing the extraction of features important for water-level control. Additionally, we optimize the node parameters of the neural network and use the ReLU activation function to ensure the network can continuously observe and learn from the external water-tank environment while avoiding vanishing gradients. We enhance the system's feedback regulation by adding the PID controller output, computed from the liquid-level deviation and height, to the observer input. This integration with the DDPG control method effectively leverages the benefits of both, improving the robustness and adaptability of the system. Experimental results show that the proposed model outperforms traditional control methods in convergence, tracking, anti-disturbance, and robustness, highlighting its effectiveness in improving the stability and precision of double-capacity water tank systems.
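The compensation scheme described in the abstract can be sketched in a few lines. This is an illustrative toy only: the two-tank coefficients, PID gains, and the zero-valued placeholder for the DDPG correction are assumptions, not values from the paper. It shows the structural idea: a PID term driven by the level deviation is added both to the control signal and to the observation a DDPG agent would act on.

```python
class PID:
    """Discrete PID controller."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, err):
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv


def tank_step(h1, h2, u, dt=0.1, a1=0.4, a2=0.3):
    """One Euler step of a linearized double-capacity (two tanks in series) model."""
    h1_next = h1 + (u - a1 * h1) * dt        # pump inflow u fills tank 1
    h2_next = h2 + (a1 * h1 - a2 * h2) * dt  # tank 1 drains into tank 2
    return h1_next, h2_next


def run(setpoint=1.0, steps=600):
    pid = PID(kp=2.0, ki=0.6, kd=0.1, dt=0.1)
    h1 = h2 = 0.0
    obs = None
    for _ in range(steps):
        err = setpoint - h2           # liquid-level deviation
        u_pid = pid.step(err)
        u_ddpg = 0.0                  # placeholder for the learned correction
        h1, h2 = tank_step(h1, h2, u_pid + u_ddpg)
        # Observation a DDPG agent would receive: deviation, levels, PID term.
        obs = (err, h1, h2, u_pid)
    return h2, obs
```

Running `run()` drives the second tank's level to the setpoint with PID alone; in the paper's scheme, a trained DDPG actor would supply `u_ddpg` on top of this baseline, providing the adaptive compensation.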
Pages: 16