Examining the Instructional Sensitivity of Constructed-Response Achievement Test Item Scores

Cited: 0
Authors
Traynor, Anne [1 ]
Li, Cheng-Hsien [2 ]
Zhou, Shuqi [3 ]
Affiliations
[1] Purdue Univ, W Lafayette, IN USA
[2] Natl Sun Yat Sen Univ, Kaohsiung, Taiwan
[3] Donghua Univ, Shanghai, Peoples R China
Keywords
instructional sensitivity; validity; test content; ASSESSMENTS; TAXONOMY;
DOI
10.1177/00131644241313212
CLC Classification
G44 [Educational Psychology];
Subject Classification
0402 ; 040202 ;
Abstract
Inferences about student learning from large-scale achievement test scores are fundamental in education. For achievement test scores to provide useful information about student learning progress, differences in the content of instruction (i.e., the implemented curriculum) should affect test-takers' item responses. Existing research has begun to identify patterns in the content of instructionally sensitive multiple-choice achievement test items. To inform future test design decisions, this study identified instructionally (in)sensitive constructed-response achievement items, then characterized features of those items and their corresponding scoring rubrics. First, we used simulation to evaluate an item step difficulty difference index for constructed-response test items, derived from the generalized partial credit model. The statistical performance of the index was adequate, so we then applied it to data from 32 constructed-response eighth-grade science test items. We found that the instructional sensitivity (IS) index values varied appreciably across the category boundaries within an item as well as across items. Content analysis by master science teachers allowed us to identify general features of item score categories that show high, or negligible, IS.
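The index described above is derived from the generalized partial credit model (GPCM), in which each constructed-response item has step difficulties marking the boundaries between adjacent score categories. The sketch below shows the standard GPCM category probabilities and an illustrative per-step difference index; the function names, parameter values, and the simple subtraction-based index are hypothetical assumptions for illustration, not the authors' exact index or data.

```python
import numpy as np

def gpcm_category_probs(theta, a, b):
    """Category response probabilities under the generalized partial
    credit model (GPCM) for a single polytomous item.

    theta : examinee ability
    a     : item discrimination
    b     : step difficulties b_1..b_m (category 0 contributes 0)
    """
    # Cumulative logits: category k uses sum_{v<=k} a*(theta - b_v)
    steps = np.concatenate(([0.0], a * (theta - np.asarray(b, dtype=float))))
    num = np.exp(np.cumsum(steps))   # unnormalized terms for categories 0..m
    return num / num.sum()           # normalize so probabilities sum to 1

def step_difficulty_difference(b_low_exposure, b_high_exposure):
    """Illustrative per-step index (hypothetical, not the authors' exact
    statistic): how much easier each score-category boundary is for
    examinees who received the relevant instruction."""
    return (np.asarray(b_low_exposure, dtype=float)
            - np.asarray(b_high_exposure, dtype=float))

# A 4-category item (scores 0..3) with assumed parameters
probs = gpcm_category_probs(theta=0.0, a=1.0, b=[-1.0, 0.0, 1.0])
print(probs.round(3))  # probabilities for categories 0..3, summing to 1

# Boundaries shift by different amounts, mirroring the paper's finding that
# sensitivity can vary across category boundaries within one item
print(step_difficulty_difference([-0.5, 0.6], [-1.2, 0.1]))
```

A per-boundary index like this (rather than a single item-level statistic) is what lets sensitivity vary across the category boundaries within an item, as the abstract reports.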
Pages: 32