How to improve machine learning models for lithofacies identification by practical and novel ensemble strategy and principles

Cited: 22
Authors
Dong, Shao-Qun [1 ,2 ]
Sun, Yan-Ming [1 ,2 ]
Xu, Tao [1 ,2 ]
Zeng, Lian-Bo [1 ,3 ]
Du, Xiang-Yi [1 ,3 ]
Yang, Xu [1 ,2 ]
Liang, Yu [1 ,2 ]
Affiliations
[1] China Univ Petr, State Key Lab Petr Resources & Prospecting, Beijing 102249, Peoples R China
[2] China Univ Petr, Coll Sci, Beijing 102249, Peoples R China
[3] China Univ Petr, Coll Geosci, Beijing 102249, Peoples R China
Funding
China Postdoctoral Science Foundation; National Natural Science Foundation of China
Keywords
Lithofacies identification; Machine learning; Ensemble learning strategy; Ensemble principle; Homogeneous ensemble; Heterogeneous ensemble; LITHOLOGY IDENTIFICATION; DISCRIMINANT-ANALYSIS; PREDICTION; FACIES; BASIN; FIELD; ZONE;
DOI
10.1016/j.petsci.2022.09.006
Chinese Library Classification (CLC)
TE [Petroleum and Natural Gas Industry]; TK [Energy and Power Engineering]
Discipline Codes
0807; 0820
Abstract
Typically, the relationship between well logs and lithofacies is complex, which leads to low accuracy in lithofacies identification. Machine learning (ML) methods are often applied to identify lithofacies from logs labelled by rock cores, but their accuracy is limited to some extent. To further improve accuracy, a practical and novel ensemble learning strategy and ensemble principles are proposed in this work, which allow geologists unfamiliar with ML to establish a good ML lithofacies identification model and help geologists familiar with ML to further improve the accuracy of lithofacies identification. The ensemble learning strategy combines ML methods as sub-classifiers to generate a comprehensive lithofacies identification model, which aims to reduce the variance error in prediction. Each sub-classifier is trained on randomly sampled labelled data with random features. The novelty of this work lies in the ensemble principles, which make the sub-classifiers just overfit through algorithm parameter setting and sub-dataset sampling; these principles help reduce the bias error in prediction. Two issues are discussed, namely (1) whether only relatively simple single-classifier methods can serve as sub-classifiers, and how to select proper ML methods as sub-classifiers; and (2) whether different kinds of ML methods can be combined as sub-classifiers and, if so, how to determine a proper combination. To test the effectiveness of the ensemble strategy and principles for lithofacies identification, different kinds of machine learning algorithms are selected as sub-classifiers, including regular classifiers (LDA, NB, KNN, ID3 tree and CART), a kernel method (SVM), and ensemble learning algorithms (RF, AdaBoost, XGBoost and LightGBM). The experiments use a published lithofacies dataset from the Daniudi gas field (DGF) in the Ordos Basin, China.
Based on a series of comparisons between ML algorithms and their corresponding ensemble models built with the proposed strategy and principles, the following conclusions are drawn: (1) not only decision trees but also other single classifiers and ensemble learning classifiers can be used as sub-classifiers in homogeneous ensemble learning, and the ensemble can improve the accuracy of the original classifiers; (2) the ensemble principles for the introduced homogeneous and heterogeneous ensemble strategies are effective in promoting ML performance in lithofacies identification; (3) in practice, a heterogeneous ensemble is more suitable for building a more powerful lithofacies identification model, though it is more complex.
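The abstract's core mechanism (each sub-classifier trained on a bootstrap sample of the labelled data restricted to a random feature subset, with predictions combined by majority vote) can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: scikit-learn classifiers (KNN, CART, SVM) stand in for the heterogeneous sub-classifier pool, and synthetic data replaces the DGF well-log dataset.

```python
# Hedged sketch of the heterogeneous ensemble strategy: each sub-classifier
# sees a bootstrap sample of the labelled data and a random subset of the
# "log" features; predictions are combined by majority vote.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-in for core-labelled well-log data (3 lithofacies classes).
X, y = make_classification(n_samples=300, n_features=8, n_informative=5,
                           n_classes=3, random_state=0)

# Heterogeneous pool: different kinds of ML methods as sub-classifiers.
sub_classifiers = [KNeighborsClassifier(n_neighbors=3),
                   DecisionTreeClassifier(),   # unpruned tree, allowed to overfit
                   SVC(kernel="rbf")]

ensemble = []
for clf in sub_classifiers:
    rows = rng.integers(0, len(X), len(X))                # bootstrap sample of labelled data
    cols = rng.choice(X.shape[1], size=5, replace=False)  # random feature subset
    clf.fit(X[np.ix_(rows, cols)], y[rows])
    ensemble.append((clf, cols))

def predict(samples):
    """Majority vote across the trained sub-classifiers."""
    votes = np.stack([clf.predict(samples[:, cols]) for clf, cols in ensemble])
    return np.array([np.bincount(col).argmax() for col in votes.T])

acc = (predict(X) == y).mean()
```

Votes are unweighted here; the diversity injected by bootstrap rows and random feature columns is what drives the variance reduction the abstract describes.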
Pages: 733-752
Page count: 20