Consumer credit risk: Individual probability estimates using machine learning

Cited by: 123
Authors
Kruppa, Jochen [1 ]
Schwarz, Alexandra [2 ]
Arminger, Gerhard [2 ]
Ziegler, Andreas [1 ]
Affiliations
[1] Univ Lubeck, Univ Klinikum Schleswig Holstein, Inst Med Biometrie & Stat, D-23562 Lubeck, Germany
[2] Univ Wuppertal, Schumpeter Sch Business & Econ, D-42097 Wuppertal, Germany
Keywords
Probability estimation; Random forest; Credit scoring; Probability machines; Logistic regression; Machine learning; Improved confidence intervals; Classification algorithms; Random forests; Convergence; Performance; Consistency; Prediction; Regression
DOI
10.1016/j.eswa.2013.03.019
Chinese Library Classification: TP18 [Theory of artificial intelligence]
Discipline classification codes: 081104; 0812; 0835; 1405
Abstract
Consumer credit scoring is often considered a classification task in which clients receive either a good or a bad credit status. Default probabilities provide more detailed information about the creditworthiness of consumers, and they are usually estimated by logistic regression. Here, we present a general framework for estimating individual consumer credit risks using machine learning methods. Since a probability is an expected value, all nonparametric regression approaches that are consistent for the mean are consistent for the probability estimation problem. Among others, random forests (RF), k-nearest neighbors (kNN), and bagged k-nearest neighbors (bNN) belong to this class of consistent nonparametric regression approaches. We apply the machine learning methods and an optimized logistic regression to a large dataset of complete payment histories of short-term installment credits. We demonstrate probability estimation in Random Jungle, an RF package written in C++ with a generalized framework for fast tree growing, probability estimation, and classification. We also describe an algorithm for tuning the terminal node size for probability estimation. We demonstrate that regression RF outperforms the optimized logistic regression model, kNN, and bNN on the test data of the short-term installment credits. (c) 2013 Elsevier Ltd. All rights reserved.
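The core idea of the abstract can be sketched in a few lines: since P(Y=1 | x) = E[Y | x], any regression method that is consistent for the conditional mean yields consistent probability estimates when applied to a 0/1 default indicator. The sketch below uses plain k-nearest-neighbor regression in NumPy as a stand-in for the methods discussed in the paper; the synthetic data, the toy logistic model, and the choice k=50 are illustrative assumptions, not taken from the paper or its Random Jungle implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 3))
# Toy model: true default probability depends on two features.
p_true = 1.0 / (1.0 + np.exp(-(1.5 * X[:, 0] - X[:, 1])))
y = rng.binomial(1, p_true)  # observed 0/1 default indicator

def knn_probability(X_train, y_train, x_query, k=50):
    """kNN *regression* on a binary outcome: the mean of y over the k
    nearest training cases estimates P(Y=1 | x_query)."""
    d = np.linalg.norm(X_train - x_query, axis=1)
    nearest = np.argsort(d)[:k]
    return y_train[nearest].mean()

# Estimate default probabilities for the first 200 cases.
p_hat = np.array([knn_probability(X, y, X[i]) for i in range(200)])
print("estimates in [0, 1]:", bool(np.all((p_hat >= 0) & (p_hat <= 1))))
print("MAE vs true probabilities:",
      round(float(np.mean(np.abs(p_hat - p_true[:200]))), 3))
```

The neighborhood size k plays a role analogous to the terminal node size tuned for the regression RF in the paper: larger values average over more cases, trading variance of the probability estimate against bias.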
Pages: 5125-5131 (7 pages)