Minimization of a Short Computer-Based Test in Reading

Cited: 12
Authors
Schurig, Michael [1 ]
Jungjohann, Jana [2 ]
Gebhardt, Markus [2 ]
Affiliations
[1] TU Dortmund Univ, Fac Rehabil Sci, Dortmund, Germany
[2] Univ Regensburg, Fac Human Sci, Regensburg, Germany
Keywords
learning progress monitoring; test minimization; test length; computer-based testing; test development; item reduction; CURRICULUM-BASED MEASUREMENT; STUDENTS; MODEL; MAZE; RELIABILITY; IMPACT;
DOI
10.3389/feduc.2021.684595
CLC Classification
G40 [Education];
Subject Classification Codes
040101 ; 120403 ;
Abstract
Formative tests and assessments have high potential to support learning, especially for students with special educational needs. One approach to gaining assessment information on student learning is to monitor learning progress; for this purpose, practitioners often administer multiple repeated tests. To be useful in practice, tests must meet various interdependent quality criteria. Test length is a property that touches several of these criteria, such as utility and economy: a test must be long enough to yield a meaningful, reliable, and comparable measure, but short enough to be usable in classroom situations. An approach to evaluating and minimizing the length of a computer-based test of sentence comprehension (N = 761), using a one-parameter logistic model, is introduced. It is shown that the test can be shortened from eight to five minutes while the estimates of student ability remain relatively stable for both a random-item-order and a fixed-item-order variant. The consequences of progress-monitoring test development, and of the test-time reduction procedure, for the different quality criteria are outlined. The data and syntax are published in the OSF project https://osf.io/hnbs8/.
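The comparison the abstract describes can be sketched in miniature. This is not the authors' code (their data and syntax are at the OSF link above) but a minimal illustrative example, on simulated data, of the underlying idea: under a one-parameter logistic (Rasch) model, a person's ability estimate from a shortened item set should stay close to the estimate from the full set. All item counts, difficulties, and the true ability value below are made-up assumptions.

```python
import math
import random

def p_correct(theta, b):
    """1PL (Rasch) probability that a person of ability theta
    answers an item of difficulty b correctly."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def estimate_theta(responses, difficulties, lo=-4.0, hi=4.0, tol=1e-6):
    """Maximum-likelihood ability estimate via bisection on the
    score equation: observed score = sum of expected probabilities."""
    target = sum(responses)
    # Perfect and zero scores have no finite MLE; clamp inward.
    target = min(max(target, 0.5), len(responses) - 0.5)
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        expected = sum(p_correct(mid, b) for b in difficulties)
        if expected < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

random.seed(1)
true_theta = 0.5                                         # assumed true ability
full_items = [random.gauss(0.0, 1.0) for _ in range(40)] # full-length test
short_items = full_items[:25]                            # shortened test

# Simulate one person's responses to the full test.
answers = [1 if random.random() < p_correct(true_theta, b) else 0
           for b in full_items]

theta_full = estimate_theta(answers, full_items)
theta_short = estimate_theta(answers[:25], short_items)
print(theta_full, theta_short)
```

With fewer items the standard error of the estimate grows, but because the shortened set is a subset of the full set, the two estimates remain strongly related, which is the stability property the study evaluates.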
Pages: 12