A Review of Feature Selection Methods for Machine Learning-Based Disease Risk Prediction

Cited by: 362
Authors
Pudjihartono, Nicholas [1 ]
Fadason, Tayaza [1 ,2 ]
Kempa-Liehr, Andreas W. [3 ]
O'Sullivan, Justin M. [1 ,2 ,4 ,5 ,6 ]
Affiliations
[1] Univ Auckland, Liggins Inst, Auckland, New Zealand
[2] Maurice Wilkins Ctr Mol Biodiscovery, Auckland, New Zealand
[3] Univ Auckland, Dept Engn Sci, Auckland, New Zealand
[4] Univ Southampton, MRC Lifecourse Epidemiol Unit, Southampton, England
[5] ASTAR, Singapore Inst Clin Sci, Singapore, Singapore
[6] Garvan Inst Med Res, Australian Parkinsons Mission, Sydney, NSW, Australia
Source
FRONTIERS IN BIOINFORMATICS | 2022 / Vol. 2
Keywords
machine learning; feature selection (FS); risk prediction; disease risk prediction; statistical approaches; GENOME-WIDE ASSOCIATION; ROBUST FEATURE-SELECTION; FALSE DISCOVERY RATE; MUTUAL INFORMATION; RANDOM FORESTS; GENE; RELEVANCE; LOCI; GWAS; DIMENSIONALITY;
DOI
10.3389/fbinf.2022.927312
Chinese Library Classification
Q [Biological Sciences];
Discipline Classification Codes
07; 0710; 09;
Abstract
Machine learning has shown utility in detecting patterns within large, unstructured, and complex datasets. One of the promising applications of machine learning is in precision medicine, where disease risk is predicted using patient genetic data. However, creating an accurate prediction model based on genotype data remains challenging due to the so-called "curse of dimensionality" (i.e., the number of features vastly exceeds the number of samples). Therefore, the generalizability of machine learning models benefits from feature selection, which aims to extract only the most "informative" features and remove noisy, "non-informative," irrelevant, and redundant features. In this article, we provide a general overview of the different feature selection methods, their advantages, disadvantages, and use cases, focusing on the detection of relevant features (i.e., SNPs) for disease risk prediction.
Pages: 17
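
To make the abstract's point concrete, here is a minimal sketch of filter-style feature selection on a high-dimensional genotype matrix, written in Python with scikit-learn. The simulated data (SNPs coded 0/1/2), the sample sizes, the k = 100 cutoff, and the logistic-regression classifier are illustrative assumptions, not the authors' pipeline.

    import numpy as np
    from sklearn.feature_selection import SelectKBest, mutual_info_classif
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import Pipeline

    # Simulated case/control cohort: far more SNP features than samples
    # (the "curse of dimensionality" the abstract refers to).
    rng = np.random.default_rng(0)
    n_samples, n_snps = 500, 5_000
    X = rng.integers(0, 3, size=(n_samples, n_snps))   # genotypes coded 0/1/2
    y = rng.integers(0, 2, size=n_samples)              # case/control labels

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=0
    )

    # Filter step: keep only the k SNPs with the highest mutual information
    # with the phenotype, then fit a simple classifier on the reduced set.
    model = Pipeline([
        ("select", SelectKBest(score_func=mutual_info_classif, k=100)),
        ("clf", LogisticRegression(max_iter=1000)),
    ])
    model.fit(X_train, y_train)
    print("Held-out accuracy:", model.score(X_test, y_test))

A wrapper or embedded selector could be swapped in at the "select" step without changing the rest of the pipeline; a filter is shown here only because it scales most easily to genome-wide feature counts.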