Adaptive Regulator for Natural Gas Transportation based on DDPG

Cited: 0
Authors
Liu, Yingjian [1 ]
Zhang, Xiaodong [1 ]
Affiliations
[1] China Univ Petr East China, Qingdao 266000, Peoples R China
Source
2024 43RD CHINESE CONTROL CONFERENCE, CCC 2024 | 2024
Keywords
Adaptive regulator; DDPG controller; Pressure and flow control; Reinforcement learning;
DOI
Not available
CLC number
TP [Automation technology, computer technology];
Subject classification code
0812;
Abstract
In this paper, we propose an adaptive regulator for natural gas transportation based on the Deep Deterministic Policy Gradient (DDPG) algorithm. Currently, pressure and flow control in most natural gas pipelines relies on traditional control methods. However, natural gas pipeline transportation is subject to frequent environmental changes; for instance, variations in factors such as the temperature and density of the natural gas can necessitate cumbersome readjustment and redesign of traditional controllers. In contrast, a DDPG controller can automatically learn and optimize its control strategy during training once appropriate performance indicators are set, making it more convenient, more flexible, and suitable for a wide range of control tasks. This paper constructs a model of the natural gas pipeline environment, designs an appropriate control scheme, and develops a DDPG controller together with the relevant performance indicators. Simulation experiments on the established pipeline model verify the effectiveness of the DDPG control algorithm.
Pages: 2727-2731
Number of pages: 5
References
16 in total
[1]   A state space model for transient flow simulation in natural gas pipelines [J].
Alamian, R. ;
Behbahani-Nejad, M. ;
Ghanbarzadeh, A. .
JOURNAL OF NATURAL GAS SCIENCE AND ENGINEERING, 2012, 9 :51-59
[2]  
[Anonymous], 2014, PMLR
[3]   The accuracy and efficiency of a MATLAB-Simulink library for transient flow simulation of gas pipelines and networks [J].
Behbahani-Nejad, M. ;
Bagheri, A. .
JOURNAL OF PETROLEUM SCIENCE AND ENGINEERING, 2010, 70 (3-4) :256-265
[4]  
Ioffe Sergey, 2015, Proceedings of Machine Learning Research, V37, P448, DOI 10.48550/ARXIV.1502.03167
[5]  
Kendall A, 2019, IEEE INT CONF ROBOT, P8248, DOI 10.1109/ICRA.2019.8793742
[6]  
King DB, 2015, ACS SYM SER, V1214, P1, DOI 10.1021/bk-2015-1214.ch001
[7]   Deep reinforcement learning in smart manufacturing: A review and prospects [J].
Li, Chengxi ;
Zheng, Pai ;
Yin, Yue ;
Wang, Baicun ;
Wang, Lihui .
CIRP JOURNAL OF MANUFACTURING SCIENCE AND TECHNOLOGY, 2023, 40 :75-101
[8]  
Lillicrap Timothy P., 2015, INT C LEARNING REPRE
[9]  
Mnih V, 2016, PR MACH LEARN RES, V48
[10]   Human-level control through deep reinforcement learning [J].
Mnih, Volodymyr ;
Kavukcuoglu, Koray ;
Silver, David ;
Rusu, Andrei A. ;
Veness, Joel ;
Bellemare, Marc G. ;
Graves, Alex ;
Riedmiller, Martin ;
Fidjeland, Andreas K. ;
Ostrovski, Georg ;
Petersen, Stig ;
Beattie, Charles ;
Sadik, Amir ;
Antonoglou, Ioannis ;
King, Helen ;
Kumaran, Dharshan ;
Wierstra, Daan ;
Legg, Shane ;
Hassabis, Demis .
NATURE, 2015, 518 (7540) :529-533