Ensemble-Based Machine Learning Algorithm for Loan Default Risk Prediction

Cited by: 2
Authors
Akinjole, Abisola [1 ]
Shobayo, Olamilekan [1 ]
Popoola, Jumoke [1 ]
Okoyeigbo, Obinna [2 ]
Ogunleye, Bayode [3 ]
Affiliations
[1] Sheffield Hallam Univ, Dept Comp, Sheffield S1 2NU, England
[2] Edge Hill Univ, Dept Psychol, Ormskirk L39 4QP, England
[3] Univ Brighton, Dept Comp & Math, Brighton BN2 4GJ, England
Keywords
credit default prediction; deep learning; ensemble learning; machine learning; CREDIT; NETWORK; TREES; SMOTE
DOI
10.3390/math12213423
Chinese Library Classification (CLC) Number
O1 [Mathematics]
Subject Classification Codes
0701; 070101
Abstract
Predicting credit default risk is important to financial institutions, as accurately predicting the likelihood of a borrower defaulting on a loan helps to reduce financial losses, thereby maintaining profitability and stability. Although machine learning models have been used to assess large volumes of applications with complex attributes, there is still a need to identify the most effective techniques for model development, including techniques to address data imbalance. In this research, we conducted a comparative analysis of random forests, decision trees, Support Vector Machines (SVMs), XGBoost (Extreme Gradient Boosting), AdaBoost (Adaptive Boosting) and the multilayer perceptron to predict credit defaults using loan data from LendingClub. Additionally, XGBoost was used as a framework for testing and evaluating various techniques. In particular, we applied this XGBoost framework to handle the observed class imbalance by testing several resampling methods: Random Over-Sampling (ROS), the Synthetic Minority Over-Sampling Technique (SMOTE), Adaptive Synthetic Sampling (ADASYN), Random Under-Sampling (RUS), and hybrid approaches such as SMOTE with Tomek Links (SMOTE + Tomek) and SMOTE with Edited Nearest Neighbours (SMOTE + ENN). The results showed that balanced datasets significantly outperformed the imbalanced dataset, with SMOTE + ENN delivering the best overall performance: an accuracy of 90.49%, a precision of 94.61% and a recall of 92.02%. Furthermore, ensemble methods such as voting and stacking were employed to enhance performance further. Our proposed model achieved an accuracy of 93.7%, a precision of 95.6% and a recall of 95.5%, which shows the potential of ensemble methods to improve credit default predictions and can provide lending platforms with a tool to reduce default rates and financial losses. In conclusion, the findings from this study have broader implications for financial institutions, offering a robust approach to risk assessment beyond the LendingClub dataset.
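The abstract describes a two-stage recipe: rebalance the training data with a hybrid resampler, then fit a stacking ensemble. Below is a minimal sketch of that recipe using scikit-learn, imbalanced-learn and xgboost; it is an illustration of the general approach rather than the authors' implementation, and the synthetic data, base-learner choices and the logistic-regression meta-learner are assumptions made here for demonstration.

# Minimal sketch: SMOTE + ENN resampling followed by a stacking ensemble.
# Assumptions (not from the paper): synthetic stand-in data in place of the
# LendingClub records, and illustrative model hyperparameters.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score
from imblearn.combine import SMOTEENN
from xgboost import XGBClassifier

# Synthetic imbalanced data: ~15% minority (default) class, with a weak
# signal injected into the first feature so the demo is non-trivial.
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 20))
y = (rng.random(5000) < 0.15).astype(int)
X[:, 0] += 1.5 * y

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

# Rebalance the training split only: SMOTE oversamples the minority class,
# then Edited Nearest Neighbours (ENN) removes ambiguous, noisy samples.
X_res, y_res = SMOTEENN(random_state=42).fit_resample(X_train, y_train)

# Stacking ensemble: base learners feed a logistic-regression meta-learner.
stack = StackingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=200, random_state=42)),
        ("xgb", XGBClassifier(eval_metric="logloss", random_state=42)),
    ],
    final_estimator=LogisticRegression(max_iter=1000),
)
stack.fit(X_res, y_res)

pred = stack.predict(X_test)
print(f"accuracy:  {accuracy_score(y_test, pred):.4f}")
print(f"precision: {precision_score(y_test, pred):.4f}")
print(f"recall:    {recall_score(y_test, pred):.4f}")

Note that resampling is applied to the training split only, so the held-out metrics reflect the original class distribution; the voting variant mentioned in the abstract could be tried by swapping in sklearn.ensemble.VotingClassifier with the same base learners.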
Pages: 31