Summary Intervals for Model-Based Classification Accuracy and Consistency Indices

Cited by: 3
Author
Gonzalez, Oscar [1 ]
Affiliation
[1] Univ North Carolina, Chapel Hill, NC 27599 USA
Keywords
classification accuracy; classification consistency; confidence intervals; factor model; screening; STATISTICS; IMPACT;
DOI
10.1177/00131644221092347
Chinese Library Classification
G44 [Educational Psychology]
Discipline Codes
0402; 040202
Abstract
When scores are used to make decisions about respondents, it is of interest to estimate classification accuracy (CA), the probability of making a correct decision, and classification consistency (CC), the probability of making the same decision across two parallel administrations of the measure. Model-based estimates of CA and CC computed from the linear factor model have recently been proposed, but the parameter uncertainty of the CA and CC indices has not been investigated. This article demonstrates how to estimate percentile bootstrap confidence intervals and Bayesian credible intervals for CA and CC indices, which have the added benefit of incorporating the sampling variability of the parameters of the linear factor model into the summary intervals. Results from a small simulation study suggest that percentile bootstrap confidence intervals have appropriate coverage, albeit with a small negative bias. Bayesian credible intervals with diffuse priors, however, have poor coverage, which improves once empirical, weakly informative priors are used. The procedures are illustrated by estimating CA and CC indices from a measure used to identify individuals low on mindfulness for a hypothetical intervention, and R code is provided to facilitate implementation.
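The paper's own implementation is in R; as a language-agnostic illustration of the percentile bootstrap idea the abstract describes, the following Python sketch resamples the data with replacement, recomputes the statistic on each resample, and takes the 2.5th and 97.5th percentiles of the bootstrap distribution as the interval endpoints. The binary `agreement` indicator is simulated stand-in data (hypothetical, not from the article); in the article the CA and CC indices are instead recomputed from linear factor model parameters refit to each bootstrap sample.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: 1 if the classification decision agreed across two
# parallel administrations, 0 otherwise (a stand-in for a CC index).
agreement = rng.binomial(1, 0.85, size=300)

def percentile_bootstrap_ci(data, stat, n_boot=2000, alpha=0.05, rng=rng):
    """Percentile bootstrap CI: resample with replacement, recompute the
    statistic, and take the alpha/2 and 1 - alpha/2 quantiles."""
    boot = np.array([
        stat(rng.choice(data, size=len(data), replace=True))
        for _ in range(n_boot)
    ])
    return np.quantile(boot, [alpha / 2, 1 - alpha / 2])

lo, hi = percentile_bootstrap_ci(agreement, np.mean)
print(f"CC estimate: {agreement.mean():.3f}, 95% CI: [{lo:.3f}, {hi:.3f}]")
```

Because the interval endpoints come directly from the empirical bootstrap distribution, no normality assumption is needed, which is why such intervals naturally propagate the sampling variability of the underlying model parameters.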
Pages: 240-261
Page count: 22