On the Black-Box Challenge for Fraud Detection Using Machine Learning (I): Linear Models and Informative Feature Selection

Cited by: 9
Authors
Chaquet-Ulldemolins, Jacobo [1 ]
Gimeno-Blanes, Francisco-Javier [2 ]
Moral-Rubio, Santiago [3 ]
Munoz-Romero, Sergio [1 ,3 ]
Rojo-Alvarez, Jose-Luis [1 ,3 ]
Affiliations
[1] Univ Rey Juan Carlos, Dept Signal Theory & Commun Telemat & Comp Syst, Madrid 28942, Spain
[2] Univ Miguel Hernandez, Dept Signal Theory & Commun, Elche 03202, Spain
[3] Univ Rey Juan Carlos, Inst Data Complex Networks & Cybersecur Sci DCNC, Madrid 28028, Spain
Source
APPLIED SCIENCES-BASEL | 2022, Vol. 12, Issue 07
Keywords
credit fraud detection; explainable machine learning; interpretability; feature selection;
DOI
10.3390/app12073328
Chinese Library Classification
O6 [Chemistry];
Subject Classification Code
0703;
Abstract
Artificial intelligence (AI) is rapidly shaping the global financial market and its services, thanks to the strong capabilities it has demonstrated for analysis and modeling across many disciplines. Especially remarkable is the potential that these techniques offer for the challenging problem of credit fraud detection (CFD); however, it is not easy, even for financial institutions, to remain in strict compliance with non-discriminatory and data-protection regulations while extracting the full potential of these powerful new tools. In practice, this restricts nearly all AI applications to simple, easily traceable neural networks, preventing more advanced and modern techniques from being applied. The aim of this work was to create a reliable, unbiased, and interpretable methodology to automatically evaluate CFD risk. We therefore propose a novel methodology that addresses this complexity when applying machine learning (ML) to the CFD problem, using state-of-the-art algorithms capable of quantifying the information carried by the variables and their relationships. This approach offers a new form of interpretability to cope with this multifaceted situation. First, a recently published feature selection technique, the informative variable identifier (IVI), is applied, which is capable of distinguishing among informative, redundant, and noisy variables. Second, a set of novel recurrent filters defined in this work is applied to minimize training-data bias, namely, the recurrent feature filter (RFF) and the maximally informative feature filter (MIFF). Finally, the output is classified using well-established ML techniques, such as gradient boosting, support vector machines, linear discriminant analysis, and linear regression. These models were applied first to a synthetic database, for descriptive modeling and fine tuning, and then to a real database.
Our results confirm that the proposal yields valuable interpretability by identifying the weights of the informative features that link the original variables to the final objectives. The informative features were living beyond one's means, a lacking or absent transaction trail, and unexpected overdrafts, which is consistent with other published works. Furthermore, we obtained 76% accuracy in CFD, an improvement of more than 4% on the real databases compared with other published works. We conclude that the presented methodology not only reduces dimensionality but also improves accuracy and traces the relationships between input and output features, bringing transparency to the ML reasoning process. The results obtained here serve as the starting point for the companion paper, which extends this interpretability to nonlinear ML architectures.
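The pipeline described in the abstract (feature selection to separate informative from redundant and noisy variables, followed by a comparison of classifier families) can be sketched as follows. This is a minimal illustration, not the authors' method: the IVI, RFF, and MIFF algorithms are not publicly packaged, so a mutual-information score is used here as a hypothetical stand-in for the informative-variable identification step, and logistic regression stands in for the linear model.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.svm import SVC
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for a fraud dataset: a few informative features,
# plus redundant and noisy ones, mirroring the three classes of
# variables that the IVI technique is designed to separate.
X, y = make_classification(n_samples=1000, n_features=20,
                           n_informative=5, n_redundant=5,
                           random_state=0)

# Hypothetical proxy for the informative-variable identification step:
# score each feature by mutual information with the label and keep
# those scoring above the median.
scores = mutual_info_classif(X, y, random_state=0)
keep = scores > np.median(scores)
X_sel = X[:, keep]

# Compare the classifier families named in the abstract on the
# reduced feature set, via 5-fold cross-validated accuracy.
models = {
    "gradient_boosting": GradientBoostingClassifier(random_state=0),
    "svm": SVC(),
    "lda": LinearDiscriminantAnalysis(),
    "logistic_regression": LogisticRegression(max_iter=1000),
}
accuracies = {name: cross_val_score(model, X_sel, y, cv=5).mean()
              for name, model in models.items()}
```

Reducing the feature set before classification serves both goals stated in the abstract: lower dimensionality and a direct trace from the retained input variables to the classifier's decision.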
Pages: 26