Accelerating item factor analysis on GPU with Python package xifa

Cited: 0
Authors
Huang, Po-Hsien [1]
Affiliations
[1] Natl Chengchi Univ, Dept Psychol, 64, Sect 2, Zhi Nan Rd, Taipei City, Taiwan
Keywords
Item factor analysis; Item response theory; Deep learning; Parallel computing; Maximum likelihood; Standard errors; Response theory; Model; Parameters; Future
DOI
10.3758/s13428-022-02024-x
Chinese Library Classification
B841 [Psychology research methods]
Discipline Code
040201
Abstract
Item parameter estimation is a crucial step in conducting item factor analysis (IFA). From a frequentist perspective, marginal maximum likelihood (MML) is generally regarded as the gold standard. However, fitting a high-dimensional IFA model by MML remains a challenging task. The current study demonstrates that, with the help of a GPU (graphics processing unit) and carefully designed vectorization, the computational time of MML can be greatly reduced for large-scale IFA applications. In particular, a Python package called xifa (accelerated item factor analysis) is developed, which implements a vectorized Metropolis-Hastings Robbins-Monro (VMHRM) algorithm. Our numerical experiments show that VMHRM on a GPU may run 33 times faster than its CPU version. When the number of factors is at least five, VMHRM (on GPU) is much faster than Bock-Aitkin expectation-maximization, the MHRM implemented by mirt (on CPU), and the importance-weighted autoencoder (on GPU). The GPU-implemented VMHRM is most appropriate for high-dimensional IFA with large data sets. We believe that GPU computing will play a central role in large-scale psychometric modeling in the near future.
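The speedup described in the abstract comes from vectorizing the Metropolis-Hastings sampling of latent traits across all respondents simultaneously, so that each sampling step becomes a handful of large array operations that map naturally onto a GPU. The following is a minimal NumPy sketch of that vectorization idea only; it is not xifa's actual API, and the 2PL model, function name, and fixed proposal step size are illustrative assumptions:

```python
import numpy as np

def vmh_step(theta, y, a, b, rng, step=0.5):
    """One vectorized Metropolis-Hastings update of every respondent's
    latent trait vector at once, under a 2PL IRT model with a
    standard-normal prior on the factors.

    theta : (N, K) current latent factor scores
    y     : (N, J) binary item responses
    a     : (J, K) discrimination (loading) matrix
    b     : (J,)   item intercepts
    """
    def log_post(t):
        logits = t @ a.T + b                        # (N, J) linear predictor
        ll = y * logits - np.log1p(np.exp(logits))  # Bernoulli log-likelihood
        return ll.sum(axis=1) - 0.5 * (t**2).sum(axis=1)  # + N(0, I) log-prior

    # Propose for all N respondents in one array operation.
    prop = theta + step * rng.standard_normal(theta.shape)
    log_ratio = log_post(prop) - log_post(theta)
    accept = np.log(rng.uniform(size=theta.shape[0])) < log_ratio
    # Keep the proposal where accepted, the old state otherwise.
    return np.where(accept[:, None], prop, theta), accept.mean()

# Tiny synthetic demo: N respondents, J items, K factors.
rng = np.random.default_rng(0)
N, J, K = 200, 10, 2
a = rng.uniform(0.5, 1.5, size=(J, K))
b = rng.normal(size=J)
theta_true = rng.standard_normal((N, K))
p = 1.0 / (1.0 + np.exp(-(theta_true @ a.T + b)))
y = (rng.uniform(size=(N, J)) < p).astype(float)

theta = np.zeros((N, K))
for _ in range(20):  # a short chain; real estimation needs many more draws
    theta, rate = vmh_step(theta, y, a, b, rng)
```

Because every step is expressed as whole-array arithmetic rather than a per-respondent loop, swapping the array backend for a GPU library (e.g., PyTorch or JAX tensors) moves the same computation onto the GPU, which is the essence of the vectorization strategy the abstract describes.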
Pages: 4403-4418
Page count: 16