EFFICIENT AND ADAPTIVE LINEAR REGRESSION IN SEMI-SUPERVISED SETTINGS

Cited by: 54
Authors
Chakrabortty, Abhishek [1 ]
Cai, Tianxi [2 ]
Affiliations
[1] Univ Penn, Dept Stat, 3730 Walnut St,Jon M Huntsman Hall,4th Floor, Philadelphia, PA 19104 USA
[2] Harvard Univ, Dept Biostat, 655 Huntington Ave,Bldg 2,4th Floor, Boston, MA 02115 USA
Funding
National Institutes of Health (US);
Keywords
Semi-supervised linear regression; semiparametric inference; model mis-specification; adaptive estimation; semi-nonparametric imputation; SLICED INVERSE REGRESSION; PRINCIPAL HESSIAN DIRECTIONS; DIMENSION REDUCTION; KERNEL ESTIMATION; DISCOVERY; RECORDS; SAMPLES;
DOI
10.1214/17-AOS1594
Chinese Library Classification (CLC)
O21 [Probability Theory and Mathematical Statistics]; C8 [Statistics];
Discipline Classification Codes
020208 ; 070103 ; 0714 ;
Abstract
We consider the linear regression problem under semi-supervised settings, wherein the available data typically consist of: (i) a small or moderate-sized "labeled" data set, and (ii) a much larger "unlabeled" data set. Such data arise naturally in settings where the outcome, unlike the covariates, is expensive to obtain, a frequent scenario in modern studies involving large databases such as electronic medical records (EMR). Supervised estimators like the ordinary least squares (OLS) estimator use only the labeled data. It is often of interest to investigate if and when the unlabeled data can be exploited to improve estimation of the regression parameter in the adopted linear model. In this paper, we propose a class of "Efficient and Adaptive Semi-Supervised Estimators" (EASE) to improve estimation efficiency. The EASE are two-step estimators adaptive to model mis-specification, leading to improved (optimal in some cases) efficiency under model mis-specification and equal (optimal) efficiency under a correctly specified linear model. This adaptive property, often unaddressed in the existing literature, is crucial for advocating "safe" use of the unlabeled data. The construction of EASE primarily involves a flexible "semi-nonparametric" imputation, including a smoothing step that works well even when the number of covariates is not small, and a follow-up "refitting" step along with a cross-validation (CV) strategy, both of which have useful practical as well as theoretical implications for addressing two important issues: under-smoothing and over-fitting. We establish asymptotic results including consistency, asymptotic normality and the adaptive properties of EASE. We also provide influence function expansions and a "double" CV strategy for inference. The results are further validated through extensive simulations, followed by application to an EMR study on auto-immunity.
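To make the two-step idea in the abstract concrete, here is a minimal, hypothetical NumPy sketch of a semi-supervised least-squares estimator of the same general flavor: a cross-fitted kernel imputation of the outcome along a one-dimensional index, a least-squares refit of the imputed outcomes over the labeled plus unlabeled covariates, and a residual correction from the labeled data. The function names (nw_smooth, semi_supervised_lm), the single-index dimension reduction, the Gaussian bandwidth, and the particular residual-correction form are illustrative assumptions; they are not the authors' exact EASE construction, which is given in the paper.

```python
import numpy as np

def nw_smooth(index_train, y_train, index_eval, bandwidth):
    """Nadaraya-Watson smoother (Gaussian kernel) on a one-dimensional index."""
    d = index_eval[:, None] - index_train[None, :]            # (n_eval, n_train)
    w = np.exp(-0.5 * (d / bandwidth) ** 2)
    w /= w.sum(axis=1, keepdims=True)
    return w @ y_train

def semi_supervised_lm(X_lab, y_lab, X_unl, bandwidth=0.5, n_folds=5, seed=0):
    """Sketch of a two-step semi-supervised linear regression estimator:
    (1) cross-fitted kernel imputation of E[Y | X] along a 1-d OLS index,
    (2) least-squares refit of the imputed outcomes on labeled + unlabeled X,
    (3) residual correction from the labeled data (keeps the estimator "safe")."""
    rng = np.random.default_rng(seed)
    n = len(y_lab)
    Z_lab = np.column_stack([np.ones(n), X_lab])
    Z_unl = np.column_stack([np.ones(len(X_unl)), X_unl])
    Z_all = np.vstack([Z_lab, Z_unl])

    beta_ols = np.linalg.lstsq(Z_lab, y_lab, rcond=None)[0]   # supervised OLS baseline
    index_lab = X_lab @ beta_ols[1:]                          # crude single-index reduction
    index_all = np.vstack([X_lab, X_unl]) @ beta_ols[1:]

    # K-fold cross-fitting of the labeled imputations to limit over-fitting
    folds = rng.permutation(n) % n_folds
    m_lab = np.empty(n)
    for k in range(n_folds):
        tr, te = folds != k, folds == k
        m_lab[te] = nw_smooth(index_lab[tr], y_lab[tr], index_lab[te], bandwidth)
    m_all = nw_smooth(index_lab, y_lab, index_all, bandwidth)

    beta_imp = np.linalg.lstsq(Z_all, m_all, rcond=None)[0]           # refit on imputations
    beta_corr = np.linalg.lstsq(Z_lab, y_lab - m_lab, rcond=None)[0]  # residual correction
    return beta_imp + beta_corr, beta_ols
```

In this simplified form, the unlabeled covariates enter only through the refit of the imputed outcomes, while the residual correction keeps the estimator anchored to the labeled data, loosely mirroring the "safe use" property the abstract emphasizes; the paper's actual EASE construction, refitting step and "double" CV inference procedure are more refined than this sketch.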
Pages: 1541-1572
Page count: 32