Deep Reinforcement Learning for Generalizable Field Development Optimization

The optimization of field development plans (FDPs), which includes optimizing well counts, well locations, and the drilling sequence, is crucial in reservoir management because it has a strong impact on the economics of the project. Traditional optimization studies are scenario specific, and their solutions do not generalize to new scenarios (e.g., a new earth model or new price assumptions) that were not seen before. In this paper, we develop an artificial intelligence (AI) using deep reinforcement learning (DRL) to address the generalizable field development optimization problem, in which the AI can provide optimized FDPs in seconds for new scenarios within its range of applicability. In the proposed approach, the field development optimization problem is formulated as a Markov decision process (MDP) in terms of states, actions, environment, and rewards. The policy function, which maps the current reservoir state to the optimal action at the next step, is represented by a deep convolutional neural network (CNN). This policy network is trained using DRL on simulation runs of a large number of different scenarios generated to cover a "range of applicability." Once trained, the DRL AI can be applied to obtain optimized FDPs for new scenarios at minimal computational cost. While the proposed methodology is general, in this paper we apply it to develop a DRL AI that provides optimized FDPs for greenfield primary depletion problems with vertical wells. This AI is trained on more than 3×10⁶ scenarios with different geological structures, rock and fluid properties, operational constraints, and economic conditions, and thus has a wide range of applicability. After training, the DRL AI yields optimized FDPs for new scenarios within seconds.
The solutions from the DRL AI suggest that, starting with no reservoir engineering knowledge, the DRL AI has developed the intelligence to place wells at "sweet spots," maintain proper well spacing and well count, and drill early. In a blind test, the solution from the DRL AI outperforms that from the reference agent, an optimized pattern drilling strategy, almost 100% of the time. The DRL AI is being applied to a real field, and preliminary results are promising. Because the DRL AI optimizes a policy rather than a plan for one particular scenario, it can be applied to obtain optimized development plans for different scenarios at very low computational cost. This is fundamentally different from traditional optimization methods, which not only require thousands of runs for one scenario but also lack the ability to generalize to new scenarios.
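To make the MDP framing above concrete, the following is a minimal toy sketch of a greenfield drilling environment with states (property maps plus a well map), actions (drill a cell or do nothing), and a reward (toy production value minus drilling cost). All names, grid sizes, and economics here are illustrative assumptions, not the authors' implementation; a hand-coded greedy rule stands in for the trained CNN policy.

```python
import numpy as np

rng = np.random.default_rng(0)
GRID = 8  # coarse toy reservoir grid (assumption, not from the paper)


class GreenfieldEnv:
    """Toy MDP environment: state = stacked property map + current well map."""

    def __init__(self):
        self.perm = rng.uniform(10, 1000, size=(GRID, GRID))  # permeability, md
        self.wells = np.zeros((GRID, GRID), dtype=bool)
        self.step_count = 0

    def state(self):
        # Channels a CNN policy would consume: normalized property + well map.
        return np.stack([self.perm / 1000.0, self.wells.astype(float)])

    def step(self, action):
        """action: flattened cell index to drill, or GRID*GRID for 'do not drill'."""
        self.step_count += 1
        done = self.step_count >= 10  # fixed 10-step drilling horizon (toy)
        if action == GRID * GRID:
            return self.state(), 0.0, done
        i, j = divmod(action, GRID)
        if self.wells[i, j]:
            return self.state(), -1.0, done  # penalize redrilling an occupied cell
        self.wells[i, j] = True
        # Toy reward: production value proportional to permeability, minus cost.
        reward = self.perm[i, j] / 100.0 - 2.0
        return self.state(), reward, done


def greedy_policy(state):
    """Stand-in for the trained CNN policy: drill the best open cell, else pass."""
    perm, wells = state
    masked = np.where(wells > 0, -np.inf, perm)  # never redrill
    best = int(np.argmax(masked))
    return best if masked.flat[best] > 0.02 else GRID * GRID


env = GreenfieldEnv()
s, total, done = env.state(), 0.0, False
while not done:
    s, r, done = env.step(greedy_policy(s))
    total += r
print(f"toy episode return: {total:.2f}")
```

In the paper's approach, the hand-coded `greedy_policy` would be replaced by a CNN trained with DRL across many such randomized environments, so that one policy generalizes over geology, constraints, and economics rather than being re-optimized per scenario.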

Keywords: optimization; field; field development; development optimization; new scenarios

Journal Title: SPE Journal
Year Published: 2021
