Explainable machine learning for hydrocarbon prospect risking

Cited by: 0
Authors
Mustafa A. [1 ]
Koster K. [2 ]
Alregib G. [1 ]
Affiliations
[1] Georgia Institute of Technology, Omni Lab for Intelligent Visual Engineering and Science (OLIVES), School of Electrical and Computer Engineering, Atlanta, GA
[2] Occidental Petroleum, Houston, TX
Keywords
attributes; interpretation; machine learning
DOI
10.1190/geo2022-0594.1
Abstract
Hydrocarbon prospect risk assessment is an important process in oil and gas exploration involving the integrated analysis of various geophysical data modalities, including seismic data, well logs, and geologic information, to estimate the likelihood of drilling success at a given drill location. Over the years, geophysicists have sought to understand the various factors that influence the probability of success for hydrocarbon prospects. Toward this end, a large database of prospect drill outcomes and associated attributes has been collected and analyzed via correlation-based techniques to determine the features that contribute most to the final outcome. Machine learning (ML) can model complex feature interactions to learn input-output mappings for complicated, high-dimensional data sets. In many instances, however, ML models are not interpretable to end users, which limits their utility for understanding the underlying scientific principles of the problem domain and their deployment in the risk assessment process. In this context, we leverage the concept of explainable ML to interpret various black-box ML models trained on the aforementioned prospect database for risk assessment. Using various case studies on real data, we determine that this model-agnostic explainability analysis for prospect risking can (1) reveal novel scientific insights into the interplay of various features in deciding prospect outcomes, (2) assist with feature engineering for ML models, (3) detect bias in data sets involving spurious correlations, and (4) build a global picture of a model's understanding of the data by aggregating local explanations on individual data points. © 2023 Society of Exploration Geophysicists.
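As a rough illustration of the model-agnostic workflow the abstract describes, the sketch below trains a black-box classifier on synthetic prospect attributes, computes local explanations per prospect, and aggregates them into a global feature ranking. The feature names, the synthetic data, and the pairing of SHAP with gradient boosting are assumptions made for illustration only; the paper's actual models and attributes are not specified in this record.

# A minimal, hypothetical sketch of model-agnostic explainability for
# prospect risking. Feature names and data are invented stand-ins,
# not the paper's actual prospect database.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for a prospect database: rows are prospects,
# columns are interpreted attributes (names are hypothetical).
feature_names = ["amplitude_anomaly", "trap_quality", "seal_integrity",
                 "source_maturity", "reservoir_thickness"]
X = rng.normal(size=(500, len(feature_names)))
# Synthetic drill outcome: success driven mostly by two features.
logits = 1.5 * X[:, 1] + 1.0 * X[:, 2] + 0.3 * rng.normal(size=500)
y = (logits > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Any black-box classifier can stand in here; the explainer treats it
# model-agnostically.
model = GradientBoostingClassifier().fit(X_train, y_train)

# Local explanations: one attribution vector per individual prospect.
explainer = shap.Explainer(model, X_train)
shap_values = explainer(X_test)

# Global picture: aggregate local attributions via mean |SHAP value|.
global_importance = np.abs(shap_values.values).mean(axis=0)
for name, score in sorted(zip(feature_names, global_importance),
                          key=lambda t: -t[1]):
    print(f"{name:>20s}: {score:.3f}")

On a synthetic example like this, the two features that actually drive the outcome should dominate the ranking, mirroring how aggregating local explanations on individual data points can build a global picture of a model's behavior.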
Pages: WA13-WA24
Page count: 11