The effect of resampling techniques on the performances of machine learning clinical risk prediction models in the setting of severe class imbalance: development and internal validation in a retrospective cohort

Cited by: 0
Authors
Ke, Janny Xue Chen [1,2,3]
DhakshinaMurthy, Arunachalam [4 ]
George, Ronald B. [5 ]
Branco, Paula [6 ]
Affiliations
[1] Department of Anesthesia, St. Paul’s Hospital, Providence Health Care, 1081 Burrard Street, Vancouver, V6Z1Y6, BC
[2] Department of Anesthesiology, Pharmacology and Therapeutics, University of British Columbia, Vancouver, BC
[3] Perioperative Medicine, Dalhousie University, Halifax, NS
[4] School of Computer Science, Carleton University, Ottawa, ON
[5] Mount Sinai Hospital, University of Toronto, Toronto, ON
[6] School of Electrical Engineering and Computer Science, University of Ottawa, Ottawa, ON
Source
Discover Artificial Intelligence | 2024, Vol. 4, Issue 1
Funding
Natural Sciences and Engineering Research Council of Canada;
Keywords
Anesthesiology; Class imbalance; Machine learning; Predictive modeling; Resampling; Risk prediction;
DOI
10.1007/s44163-024-00199-0
Abstract
Purpose: The availability of population datasets and machine learning techniques has heralded a new era of sophisticated prediction models built from large numbers of routinely collected variables. However, severe class imbalance in clinical datasets is a major challenge. The aim of this study was to investigate the impact of commonly used resampling techniques, in combination with commonly used machine learning algorithms, on a clinical dataset, to determine whether any combination of these approaches improves upon the original multivariable logistic regression with no resampling. Methods: We previously developed and internally validated a multivariable logistic regression 30-day mortality prediction model in 30,619 patients using preoperative and intraoperative features. Using the same dataset, we systematically evaluated and compared model performance after application of resampling techniques [random under-sampling, near-miss under-sampling, random oversampling, and the synthetic minority oversampling technique (SMOTE)] in combination with machine learning algorithms (logistic regression, elastic net, decision trees, random forest, and extreme gradient boosting). Results: We found that in the setting of severe class imbalance, the impact of resampling techniques on model performance varied by machine learning algorithm and evaluation metric. Existing resampling techniques did not meaningfully improve the area under the receiver operating characteristic curve (AUROC). The area under the precision-recall curve (AUPRC) was increased only by random under-sampling and SMOTE for decision trees, and by oversampling and SMOTE for extreme gradient boosting. Importantly, some combinations of algorithm and resampling technique decreased AUROC and AUPRC compared to no resampling. Conclusion: Existing resampling techniques had a variable impact on models, depending on the algorithm and the evaluation metric. Future research is needed to improve predictive performance in the setting of severe class imbalance.
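The comparison described in the abstract — a model trained with and without resampling, scored by AUROC and AUPRC — can be sketched as follows. This is an illustrative example only, not the authors' pipeline: the synthetic dataset, the hand-rolled random under-sampler, and the logistic regression settings are all assumptions for demonstration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, average_precision_score
from sklearn.model_selection import train_test_split

# Synthetic, severely imbalanced dataset (~1% positives) standing in for clinical data
X, y = make_classification(n_samples=20000, n_features=20,
                           weights=[0.99], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

def random_undersample(X, y, rng):
    """Keep every minority case; sample an equal number of majority cases."""
    pos = np.flatnonzero(y == 1)
    neg = rng.choice(np.flatnonzero(y == 0), size=pos.size, replace=False)
    idx = np.concatenate([pos, neg])
    return X[idx], y[idx]

rng = np.random.default_rng(0)
Xr, yr = random_undersample(X_tr, y_tr, rng)

# Fit the same algorithm on the original and the resampled training set,
# then evaluate both on the untouched (still imbalanced) test set
for name, (Xf, yf) in {"no resampling": (X_tr, y_tr),
                       "under-sampled": (Xr, yr)}.items():
    model = LogisticRegression(max_iter=1000).fit(Xf, yf)
    p = model.predict_proba(X_te)[:, 1]
    print(f"{name}: AUROC={roc_auc_score(y_te, p):.3f}  "
          f"AUPRC={average_precision_score(y_te, p):.3f}")
```

Note that resampling is applied to the training split only; the test set keeps the original class distribution, which is what makes AUPRC an informative metric here.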
© The Author(s) 2024.