A reinforcement learning approach for waterflooding optimization in petroleum reservoirs

Cited by: 42
Authors
Hourfar, Farzad [1]
Bidgoly, Hamed Jalaly [1]
Moshiri, Behzad [1]
Salahshoor, Karim [2]
Elkamel, Ali [3,4]
Affiliations
[1] Univ Tehran, CIPCE, Sch Elect & Comp Engn, Tehran, Iran
[2] Petr Univ Technol, Dept Automat & Instrumentat Engn, Ahvaz, Iran
[3] Univ Waterloo, Dept Chem Engn, Waterloo, ON, Canada
[4] Khalifa Univ, Petr Inst, Dept Chem Engn, Abu Dhabi, U Arab Emirates
Keywords
Waterflooding process; Reinforcement learning; Production optimization; Closed-loop reservoir management; Derivative-free optimization; MULTIPHASE FLOW; SUBSURFACE FLOW; TERM PRODUCTION; POROUS-MEDIA; OIL-FIELD; LONG-TERM; MANAGEMENT; MODEL; TIME; PERFORMANCE;
DOI
10.1016/j.engappai.2018.09.019
Chinese Library Classification (CLC)
TP [Automation and Computer Technology]
Discipline code
0812
Abstract
Waterflooding optimization in closed-loop management of oil reservoirs is considered a challenging problem due to the complicated and unpredictable dynamics of the process. The main goal in waterflooding is to adjust the manipulated variables such that total oil production, or a defined objective function strongly correlated with financial profit, is maximized. Owing to recent progress in computational tools and the expansion of computing facilities, non-conventional optimization methods have become feasible for achieving these goals. In this paper, the waterflooding optimization problem is defined and formulated in the framework of Reinforcement Learning (RL), a derivative-free and model-free optimization approach. This technique avoids the challenges associated with complex gradient calculations for the objective functions; consequently, explicit dynamic models of the reservoir for gradient computation are not required to apply the proposed method. By appropriately defining the learning problem and the necessary variables, the developed algorithm can achieve the desired operational targets. The fundamental learning elements, namely actions, states, and rewards, are delineated in both the discrete and continuous domains. The proposed methodology is implemented and assessed on the Egg model, a popular and well-known reservoir case study. Different configurations of active injection and production wells are considered to simulate Single-Input-Multi-Output (SIMO) as well as Multi-Input-Multi-Output (MIMO) optimization scenarios. The results demonstrate that the "agent" gradually, but successfully, learns the most appropriate sequence of actions for each practical scenario. Consequently, the manipulated variables (actions) are set optimally to satisfy the defined production objectives, which are generally dictated by the management level or even by contractual obligations. Moreover, it is shown that by properly adjusting the rewarding policies in the learning process, diverse forms of multi-objective optimization problems can be formulated, analyzed, and solved.
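The learning elements described in the abstract (discrete actions, states, and rewards) can be illustrated with a minimal tabular Q-learning loop. This is only a toy sketch, not the paper's actual Egg-model setup: the five-level "water-cut" state, the three injection-rate actions, and the surrogate reward function below are all illustrative assumptions standing in for a reservoir simulator.

```python
import random

# Toy sketch (illustrative assumptions, not the authors' implementation):
# tabular Q-learning for a single injection well, in the spirit of the
# model-free, derivative-free formulation described in the abstract.

N_STATES = 5          # discretized "water-cut" levels (assumption)
ACTIONS = [-1, 0, 1]  # decrease / hold / increase injection rate (assumption)
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.2

def toy_reservoir_step(state, action):
    """Hypothetical surrogate dynamics: reward peaks when the agent
    steers the reservoir toward the middle state (index 2)."""
    next_state = min(N_STATES - 1, max(0, state + action))
    reward = 1.0 - abs(next_state - 2) * 0.4
    return next_state, reward

def train(episodes=800, horizon=20, seed=0):
    rng = random.Random(seed)
    q = [[0.0] * len(ACTIONS) for _ in range(N_STATES)]
    for _ in range(episodes):
        s = rng.randrange(N_STATES)
        for _ in range(horizon):
            # epsilon-greedy action selection
            if rng.random() < EPS:
                a = rng.randrange(len(ACTIONS))
            else:
                a = max(range(len(ACTIONS)), key=lambda i: q[s][i])
            s2, r = toy_reservoir_step(s, ACTIONS[a])
            # standard Q-learning temporal-difference update
            q[s][a] += ALPHA * (r + GAMMA * max(q[s2]) - q[s][a])
            s = s2
    return q

if __name__ == "__main__":
    q = train()
    # greedy policy per state; it should steer every state toward state 2
    policy = [ACTIONS[max(range(len(ACTIONS)), key=lambda i: q[s][i])]
              for s in range(N_STATES)]
    print(policy)
```

In the paper's SIMO/MIMO scenarios the action would instead be a vector of injection-rate changes and the reward a production-based objective; the update rule itself is unchanged, which is what makes the approach gradient- and model-free.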
Pages: 98-116 (19 pages)
Cited References (83 total)
[61] Shakhsi-Niaei, M.; Iranmanesh, S. H.; Torabi, S. A. Optimal planning of oil and gas development projects considering long-term production and transmission. Computers & Chemical Engineering, 2014, 65: 67-80.
[62] Sherali, Hanif D.; Bae, Ki-Hwan; Haouari, Mohamed. Integrated airline schedule design and fleet assignment: polyhedral analysis and Benders' decomposition approach. INFORMS Journal on Computing, 2010, 22(4): 500-513.
[63] Sincock, K. J. SPE Annual Technical Conference and Exhibition, 1988.
[64] Siraj, M. Mohsin; van den Hof, Paul M. J.; Jansen, Jan Dirk. Robust optimization of water-flooding in oil reservoirs using risk management tools. IFAC-PapersOnLine, 2016, 49(7): 133-138.
[65] Souza, S. A. 31 IB LAT AM C COMP, 2010: p. 15.
[66] Sutton, R. S. Reinforcement Learning: An Introduction, Vol. 2, 1998.
[67] Sutton, R. S. Advances in Neural Information Processing Systems, 1996, Vol. 8: p. 1038.
[68] Suwartadi, E. Thesis, 2012.
[69] Takagi, T.; Sugeno, M. Fuzzy identification of systems and its applications to modeling and control. IEEE Transactions on Systems, Man, and Cybernetics, 1985, 15(1): 116-132.
[70] van Eck, N. J. REINFORCEMENT LEARNI, 2004.