Amortised Design Optimization for Item Response Theory

Cited by: 1
Authors
Keurulainen, Antti [1,2]
Westerlund, Isak [2]
Keurulainen, Oskar [2]
Howes, Andrew [1,3]
Affiliations
[1] Aalto Univ, Espoo, Finland
[2] Bitville Oy, Espoo, Finland
[3] Univ Birmingham, Birmingham, England
Source
ARTIFICIAL INTELLIGENCE IN EDUCATION. POSTERS AND LATE BREAKING RESULTS, WORKSHOPS AND TUTORIALS, INDUSTRY AND INNOVATION TRACKS, PRACTITIONERS, DOCTORAL CONSORTIUM AND BLUE SKY, AIED 2023 | 2023, Vol. 1831
Keywords
Item Response Theory (IRT); Experimental Design; Deep Reinforcement Learning (DRL)
DOI
10.1007/978-3-031-36336-8_56
CLC Classification Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Item Response Theory (IRT) is a well-known method for assessing responses from humans in education and psychology. In education, IRT is used to infer student abilities and characteristics of test items from student responses. Interactions with students are expensive, calling for methods that efficiently gather information for inferring student abilities. Methods based on Optimal Experimental Design (OED) are computationally costly, making them unsuitable for interactive applications. In response, we propose incorporating amortised experimental design into IRT. Here, the computational cost is shifted to a precomputing phase by training a Deep Reinforcement Learning (DRL) agent with synthetic data. The agent is trained to select optimally informative test items for the distribution of students, and to conduct amortised inference conditioned on the experiment outcomes. During deployment, the agent estimates parameters from data and suggests the next test item for the student, in close to real time, by taking into account the history of experiments and outcomes.
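For orientation only, the sketch below is a minimal, hypothetical illustration and not the authors' implementation: it simulates synthetic students under a Rasch (1PL) model and runs a classical sequential design loop, selecting each item by maximum Fisher information at a MAP ability estimate. This is the kind of per-student sequential computation that the proposed DRL agent is trained, on synthetic data, to amortise. The item bank, standard-normal priors, and all function names are assumptions made for illustration.

# Illustrative sketch: synthetic Rasch (1PL) simulator plus a classical
# Fisher-information item selector, shown as the non-amortised baseline
# that an amortised DRL policy would replace. All names are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def rasch_prob(ability, difficulty):
    # P(correct) under the Rasch model: sigmoid(ability - difficulty).
    return 1.0 / (1.0 + np.exp(-(ability - difficulty)))

def fisher_information(ability, difficulty):
    # Rasch item information at the given ability: p * (1 - p).
    p = rasch_prob(ability, difficulty)
    return p * (1.0 - p)

def map_ability(responses, difficulties, grid=np.linspace(-4, 4, 401)):
    # MAP estimate of ability on a grid, with a standard normal prior.
    log_prior = -0.5 * grid**2
    log_lik = np.zeros_like(grid)
    for y, d in zip(responses, difficulties):
        p = rasch_prob(grid, d)
        log_lik += y * np.log(p) + (1 - y) * np.log(1 - p)
    return grid[np.argmax(log_prior + log_lik)]

# Synthetic item bank and one simulated student drawn from the training distribution.
item_difficulties = rng.normal(0.0, 1.0, size=50)
true_ability = rng.normal(0.0, 1.0)

asked, outcomes = [], []
ability_hat = 0.0  # prior mean before any responses
for step in range(10):
    # Classical sequential design: pick the unasked item with maximal
    # Fisher information at the current ability estimate.
    info = fisher_information(ability_hat, item_difficulties)
    info[asked] = -np.inf
    item = int(np.argmax(info))
    # Simulate the student's Bernoulli response under the Rasch model.
    y = int(rng.random() < rasch_prob(true_ability, item_difficulties[item]))
    asked.append(item)
    outcomes.append(y)
    ability_hat = map_ability(outcomes, item_difficulties[asked])

print(f"true ability {true_ability:+.2f}, estimate {ability_hat:+.2f}")

In the amortised setting described in the abstract, the per-step estimation and item scoring above would be replaced by a single forward pass of a policy conditioned on the history of items and outcomes, shifting the computational cost to the offline training phase.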
Pages: 359-364
Number of pages: 6