Testing conditional independence in supervised learning algorithms

Cited by: 30
Authors
Watson, David S. [1 ]
Wright, Marvin N. [2 ,3 ]
Affiliations
[1] UCL, Dept Stat Sci, London, England
[2] Leibniz Inst Prevent Res & Epidemiol BIPS, Bremen, Germany
[3] Univ Bremen, Fac Math & Comp Sci, Bremen, Germany
Keywords
Knockoffs; Machine learning; Conditional independence; Markov blanket; Variable importance; False discovery rate; Feature selection; Models; Stability
DOI
10.1007/s10994-021-06030-6
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
We propose the conditional predictive impact (CPI), a consistent and unbiased estimator of the association between one or several features and a given outcome, conditional on a reduced feature set. Building on the knockoff framework of Candes et al. (J R Stat Soc Ser B 80:551-577, 2018), we develop a novel testing procedure that works in conjunction with any valid knockoff sampler, supervised learning algorithm, and loss function. The CPI can be efficiently computed for high-dimensional data without any sparsity constraints. We demonstrate convergence criteria for the CPI and develop statistical inference procedures for evaluating its magnitude, significance, and precision. These tests aid in feature and model selection, extending traditional frequentist and Bayesian techniques to general supervised learning tasks. The CPI may also be applied in causal discovery to identify underlying multivariate graph structures. We test our method using various algorithms, including linear regression, neural networks, random forests, and support vector machines. Empirical results show that the CPI compares favorably to alternative variable importance measures and other nonparametric tests of conditional independence on a diverse array of real and synthetic datasets. Simulations confirm that our inference procedures successfully control Type I error with competitive power in a range of settings. Our method has been implemented in an R package, cpi, which can be downloaded from https://github.com/dswatson/cpi.
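The core idea in the abstract (compare a model's per-observation loss when a feature is replaced by a knockoff copy, then test the mean loss difference) can be illustrated with a minimal sketch. This is not the authors' cpi package: it assumes independent Gaussian features, for which a fresh draw from the feature's marginal distribution is a valid knockoff, and the helper names `per_sample_loss` and `cpi` are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 2000, 5

# Independent Gaussian features: in this special case a valid knockoff
# for feature j is simply a fresh draw from its marginal distribution.
X = rng.standard_normal((n, p))
y = 2.0 * X[:, 0] + rng.standard_normal(n)  # only feature 0 is informative

# Train/test split; fit ordinary least squares on the training half.
X_tr, X_te = X[: n // 2], X[n // 2:]
y_tr, y_te = y[: n // 2], y[n // 2:]
beta, *_ = np.linalg.lstsq(X_tr, y_tr, rcond=None)

def per_sample_loss(X_eval):
    """Squared-error loss for each held-out observation."""
    return (y_te - X_eval @ beta) ** 2

def cpi(j):
    """CPI-style statistic for feature j: mean loss increase when j is
    replaced by its knockoff, with a paired t-statistic."""
    X_ko = X_te.copy()
    X_ko[:, j] = rng.standard_normal(len(X_te))  # knockoff copy of feature j
    delta = per_sample_loss(X_ko) - per_sample_loss(X_te)
    t_stat = delta.mean() / (delta.std(ddof=1) / np.sqrt(len(delta)))
    return delta.mean(), t_stat

cpi_signal, t_signal = cpi(0)  # informative feature: large positive CPI
cpi_null, t_null = cpi(4)      # null feature: CPI near zero
```

In practice the learner, loss, and knockoff sampler are all interchangeable, which is the point of the framework; with dependent features a proper knockoff sampler (e.g. second-order Gaussian knockoffs) must replace the marginal resampling used here.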
Pages: 2107-2129
Page count: 23
References (84 total)
[1] Bach S, Binder A, Montavon G, Klauschen F, Mueller K-R, Samek W. On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. PLOS ONE, 2015, 10(7).
[2] Barber RF, Candes EJ. Controlling the false discovery rate via knockoffs. Annals of Statistics, 2015, 43(5): 2055-2085.
[3] Bates S, Candes E, Janson L, Wang W. Metropolized knockoff sampling. Journal of the American Statistical Association, 2021, 116(535): 1413-1427.
[4] Benjamini Y, Yekutieli D. The control of the false discovery rate in multiple testing under dependency. Annals of Statistics, 2001, 29(4): 1165-1188.
[5] Benjamini Y, Hochberg Y. Controlling the false discovery rate: a practical and powerful approach to multiple testing. Journal of the Royal Statistical Society Series B, 1995, 57(1): 289-300.
[6] Berrett TB, Wang Y, Barber RF, Samworth RJ. The conditional permutation test for independence while controlling for confounders. Journal of the Royal Statistical Society Series B, 2020, 82(1): 175-197.
[7] Bischl B, Lang M, Kotthoff L, et al. mlr: Machine learning in R. Journal of Machine Learning Research, 2016, 17.
[8] Breiman L. Random forests. Machine Learning, 2001, 45(1): 5-32.
[9] Candes E, Fan Y, Janson L, Lv J. Panning for gold: 'model-X' knockoffs for high dimensional controlled variable selection. Journal of the Royal Statistical Society Series B, 2018, 80(3): 551-577.
[10] Doran G, Muandet K, Zhang K, Schölkopf B. A permutation-based kernel conditional independence test. Uncertainty in Artificial Intelligence, 2014: 132.