Estimating Treatment Effects with the Explanatory Item Response Model

Cited by: 3
Authors
Gilbert, Joshua B. [1]
Affiliations
[1] Harvard Univ, Grad Sch Educ, Cambridge, MA 02138 USA
Keywords
Explanatory item response model; causal inference; statistical power; simulation; educational measurement; missing data; Rasch model; IRT; package; scores
DOI
10.1080/19345747.2023.2287601
Chinese Library Classification (CLC)
G40 [Education]
Subject Classification Codes
040101; 120403
Abstract
This simulation study examines the characteristics of the Explanatory Item Response Model (EIRM) for estimating treatment effects, compared with classical test theory (CTT) sum and mean scores and item response theory (IRT) theta scores. Results show that the EIRM and IRT theta scores provide bias and false positive rates generally equivalent to those of CTT scores, and better-calibrated standard errors under model misspecification. Analysis of statistical power reveals that the EIRM and IRT theta scores are more robust to missing item response data than the other methods when parametric assumptions are met and provide a moderate power benefit under heteroskedasticity, but their performance is mixed under other conditions. The methods are illustrated with an empirical application examining the causal effect of an elementary school literacy intervention on reading comprehension test scores, which demonstrates that the EIRM yields a more precise estimate of the average treatment effect than the CTT or IRT theta-score approaches. Tradeoffs in model selection and interpretation are discussed.
Pages: 19
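
To make the comparison described in the abstract concrete, the sketch below simulates Rasch-type item responses with a treatment effect on the latent trait and then recovers that effect in two ways: from standardized CTT sum scores via OLS, and from an EIRM-style model (item easiness terms plus a treatment term, with a normally distributed person effect) fit by marginal maximum likelihood with Gauss-Hermite quadrature. The sample sizes, effect size, item easiness values, and estimation routine are illustrative assumptions, not details taken from the paper.

import numpy as np
from scipy.optimize import minimize
from scipy.special import expit
import statsmodels.api as sm

rng = np.random.default_rng(1)
n_persons, n_items, delta_true = 500, 20, 0.30      # assumed design, not the paper's

# Data-generating EIRM / Rasch model: logit P(Y_pi = 1) = theta_p + delta*T_p + b_i
treat = rng.integers(0, 2, n_persons)                # person-level treatment indicator
theta = rng.normal(0.0, 1.0, n_persons)              # latent ability
b = np.linspace(-1.5, 1.5, n_items)                  # item easiness (assumed values)
y = rng.binomial(1, expit(theta[:, None] + delta_true * treat[:, None] + b[None, :]))

# (a) CTT analysis: regress standardized sum scores on treatment
score = y.sum(axis=1)
z = (score - score.mean()) / score.std(ddof=1)
ctt = sm.OLS(z, sm.add_constant(treat.astype(float))).fit()
print("CTT effect (SD units): %.3f (SE %.3f)" % (ctt.params[1], ctt.bse[1]))

# (b) EIRM analysis: marginal ML, integrating theta ~ N(0, sigma^2) out with
#     Gauss-Hermite quadrature; parameters are delta, log(sigma), and item easiness.
nodes, wts = np.polynomial.hermite_e.hermegauss(21)  # probabilists' Hermite nodes
wts = wts / wts.sum()                                # quadrature weights for N(0, 1)

def neg_loglik(par):
    delta, sigma, b_hat = par[0], np.exp(par[1]), par[2:]
    # person x node x item linear predictor
    eta = (sigma * nodes)[None, :, None] + delta * treat[:, None, None] + b_hat[None, None, :]
    logp = -(y[:, None, :] * np.logaddexp(0, -eta) + (1 - y[:, None, :]) * np.logaddexp(0, eta))
    per_node = logp.sum(axis=2)                      # log-likelihood per person x node
    m = per_node.max(axis=1, keepdims=True)          # log-sum-exp for numerical stability
    return -np.sum(m[:, 0] + np.log((wts * np.exp(per_node - m)).sum(axis=1)))

res = minimize(neg_loglik, np.zeros(2 + n_items), method="L-BFGS-B")
print("EIRM treatment effect (logit scale): %.3f" % res.x[0])

In practice the EIRM is usually fit as a mixed-effects logistic regression with crossed person and item effects (for example with lme4 in R or a Bayesian sampler); the quadrature routine above is only a self-contained stand-in for such software.
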
Related Articles
(50 in total)
  • [31] Estimating treatment effects with machine learning
    McConnell, K. John
    Lindner, Stephan
    HEALTH SERVICES RESEARCH, 2019, 54 (06) : 1273 - 1282
  • [32] Exploring the posterior of a hierarchical IRT model for item effects
    Janssen, R
    De Boeck, P
    COMPUTATIONAL STATISTICS, 2000, 15 (03) : 421 - 442
  • [33] Projective Item Response Model for Test-Independent Measurement
    Ip, Edward Hak-Sing
    Chen, Shyh-Huei
    APPLIED PSYCHOLOGICAL MEASUREMENT, 2012, 36 (07) : 581 - 601
  • [34] A sharing item response theory model for computerized adaptive testing
    Segall, DO
    JOURNAL OF EDUCATIONAL AND BEHAVIORAL STATISTICS, 2004, 29 (04) : 439 - 460
  • [35] A Speeded Item Response Model: Leave the Harder till Later
    Chang, Yu-Wei
    Tsai, Rung-Ching
    Hsu, Nan-Jung
    PSYCHOMETRIKA, 2014, 79 (02) : 255 - 274
  • [36] Exploring the posterior of a hierarchical IRT model for item effects
    Rianne Janssen
    Paul De Boeck
    Computational Statistics, 2000, 15 (3) : 421 - 442
  • [37] Bayesian Analysis of a Quantile Multilevel Item Response Theory Model
    Zhu, Hongyue
    Gao, Wei
    Zhang, Xue
    FRONTIERS IN PSYCHOLOGY, 2021, 11
  • [38] An Item Response Theory Model for Incorporating Response Times in Forced-Choice Measures
    Guo, Zhichen
    Wang, Daxun
    Cai, Yan
    Tu, Dongbo
    EDUCATIONAL AND PSYCHOLOGICAL MEASUREMENT, 2024, 84 (03) : 450 - 480
  • [39] Using the Testlet Response Model as a Shortcut to Multidimensional Item Response Theory Subscore Computation
    Thissen, David
    NEW DEVELOPMENTS IN QUANTITATIVE PSYCHOLOGY, 2013, 66 : 29 - 40
  • [40] Estimating longitudinal change in latent variable means: a comparison of non-negative matrix factorization and other item non-response methods
    Ayilara, Olawale F.
    Sajobi, Tolulope T.
    Barclay, Ruth
    Jozani, Mohammad Jafari
    Lix, Lisa M.
    JOURNAL OF STATISTICAL COMPUTATION AND SIMULATION, 2023, 93 (02) : 211 - 230