Stroke Treatment Prediction Using Features Selection Methods and Machine Learning Classifiers

Cited by: 5
Authors
Chourib, I. [1 ,2 ]
Guillard, G. [3 ]
Farah, I. R. [1 ]
Solaiman, B. [2 ]
Affiliations
[1] Natl Sch Comp Sci, STICODE Dept, RIADI Lab, Manouba, Tunisia
[2] IMT Atlantique, MATHSTIC Dept, ITI Lab, Brest, France
[3] Intradys, Brest, France
Keywords
Stroke disease; Feature selection; Data mining; Decision tree classifier; Naive Bayes; K-nearest neighbor; Recursive feature elimination; Tree-based model; Chi-square; Classification
DOI
10.1016/j.irbm.2022.02.002
Chinese Library Classification
R318 [Biomedical Engineering]
Subject Classification Code
0831
Abstract

Objectives: Feature selection is an important task that helps alleviate various machine learning and data mining issues. The main objective of a feature selection method is to build simpler and more understandable classifier models in order to improve data mining and processing performance. Therefore, a comparative evaluation of the Chi-square method, the recursive feature elimination method, and the tree-based method (using Random Forest), each applied to three common machine learning methods (K-nearest neighbor, naive Bayesian classifier, and decision tree classifier), is performed to select the most relevant features from a large set of attributes. Furthermore, the most suitable pair (feature selection method, machine learning method) providing the best performance is determined.

Materials and methods: In this paper, an overview of the most common feature selection techniques is first provided: the Chi-square method, the recursive feature elimination (RFE) method, and the tree-based method using Random Forest (TBM-RF). A comparative evaluation of the improvement brought by these feature selection methods to the three common machine learning methods (K-nearest neighbor, naive Bayesian classifier, and decision tree classifier) is then performed. For evaluation purposes, the micro-F1, accuracy, and root mean square error measures are used on the stroke disease data set.

Results: The obtained results show that the proposed approach (the tree-based method using Random Forest, TBM-RF, paired with the decision tree classifier, DTC) provides accuracy higher than 85% and an F1-score higher than 88%, outperforming KNN and NB under the Chi-square, RFE, and TBM-RF methods.

Conclusion: This study shows that the pair (TBM-RF, decision tree classifier) successfully and efficiently contributes to finding the most relevant features and to predicting and classifying patients suffering from stroke.

(c) 2022 AGBM. Published by Elsevier Masson SAS. All rights reserved.
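The selector-vs-classifier comparison the abstract describes can be sketched with scikit-learn. This is a minimal illustration, not the authors' implementation: the stroke data set is replaced by a synthetic classification problem, the RFE base estimator (logistic regression) and the number of selected features (8) are assumptions, and the scores are illustrative only.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE, SelectFromModel, SelectKBest, chi2
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the stroke data set.
X, y = make_classification(n_samples=600, n_features=20, n_informative=8,
                           random_state=0)
X = np.abs(X)  # chi2 requires non-negative feature values
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# The three feature selection methods compared in the paper.
selectors = {
    "Chi-Square": SelectKBest(chi2, k=8),
    "RFE": RFE(LogisticRegression(max_iter=1000), n_features_to_select=8),
    "TBM-RF": SelectFromModel(RandomForestClassifier(random_state=0),
                              max_features=8),
}
# The three classifiers they are paired with.
classifiers = {
    "KNN": KNeighborsClassifier(),
    "NB": GaussianNB(),
    "DTC": DecisionTreeClassifier(random_state=0),
}

# Fit every (selector, classifier) pair and record accuracy and micro-F1.
results = {}
for s_name, selector in selectors.items():
    for c_name, clf in classifiers.items():
        pipe = make_pipeline(selector, clf).fit(X_tr, y_tr)
        pred = pipe.predict(X_te)
        results[(s_name, c_name)] = (accuracy_score(y_te, pred),
                                     f1_score(y_te, pred, average="micro"))

best = max(results, key=lambda k: results[k][0])
print("best pair:", best, "scores:", results[best])
```

Wrapping each selector and classifier in a `Pipeline` ensures feature selection is fit only on the training split, avoiding leakage into the test scores.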
Pages: 678-686
Page count: 9