Handling high-dimensional data with missing values by modern machine learning techniques

Cited: 9
Authors
Chen, Sixia [1 ]
Xu, Chao [1 ]
Affiliations
[1] University of Oklahoma Health Sciences Center, Department of Biostatistics and Epidemiology, Oklahoma City, OK 73126, USA
Funding
U.S. National Institutes of Health
Keywords
Deep learning; high-dimensional data; imputation; machine learning; missing data; jackknife variance estimation; multiple imputation; fractional imputation; item nonresponse; inference; variables; selection
DOI
10.1080/02664763.2022.2068514
Chinese Library Classification
O21 [Probability theory and mathematical statistics]; C8 [Statistics]
Discipline codes
020208; 070103; 0714
Abstract
High-dimensional data are widely regarded as one of the most important types of big data, arising frequently in genetic, financial, and geographical studies. Missing values in high-dimensional data analysis must be handled properly to reduce nonresponse bias. We discuss several modern machine learning techniques, including penalized regression, tree-based approaches, and deep learning (DL), for handling missing data with high dimensionality. Specifically, the proposed methods can be used to estimate general parameters of interest, including population means and percentiles, with imputation-based estimators, propensity score estimators, and doubly robust estimators. We compare these methods through limited simulation studies and a real application. Both the simulation studies and the real application show the benefits of the DL and XGBoost approaches over the other methods in terms of balancing bias and variance.
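To make the estimator classes named in the abstract concrete, the Python sketch below combines an XGBoost outcome (imputation) model with a logistic propensity model into a doubly robust estimator of a population mean on simulated data. This is a hedged illustration only, not the authors' implementation: the simulated design, hyperparameters, and the use of the xgboost and scikit-learn packages are assumptions made here for exposition.

# A minimal, self-contained sketch (not the authors' code) of a doubly
# robust estimator for a population mean under item nonresponse. The
# data-generating process, model settings, and sample size are
# illustrative assumptions; only the general construction (machine
# learning outcome model + propensity score model) follows the abstract.
import numpy as np
from sklearn.linear_model import LogisticRegression
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
n, p = 2000, 50                                   # assumed sample size / dimension
X = rng.normal(size=(n, p))
y = X[:, 0] + 0.5 * X[:, 1] ** 2 + rng.normal(size=n)

# Missing-at-random response indicator driven by the first covariate,
# so the naive complete-case mean is biased.
r = rng.random(n) < 1.0 / (1.0 + np.exp(-(0.5 + X[:, 0])))

# Outcome (imputation) model m(x) ~ E[y | x], fit on respondents only.
m = XGBRegressor(n_estimators=200, max_depth=3, learning_rate=0.1)
m.fit(X[r], y[r])
m_hat = m.predict(X)

# Response propensity model pi(x) ~ P(r = 1 | x).
pi_hat = LogisticRegression(max_iter=1000).fit(X, r).predict_proba(X)[:, 1]
pi_hat = np.clip(pi_hat, 1e-3, 1.0)               # guard against tiny weights

# Doubly robust estimator: consistent if either m or pi is correct.
correction = np.zeros(n)
correction[r] = (y[r] - m_hat[r]) / pi_hat[r]
mu_dr = np.mean(m_hat + correction)
print(f"complete-case mean: {y[r].mean():.3f}")   # biased under MAR
print(f"doubly robust mean: {mu_dr:.3f}")
print(f"full-sample mean:   {y.mean():.3f}")      # oracle benchmark

If the imputation model is correct, the correction term has mean near zero; if instead the propensity model is correct, the inverse-probability weighting removes the bias of the imputed predictions. That is the double-robustness property the abstract refers to.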
Pages: 786-804
Page count: 19