Response Demands of Reading Comprehension Test Items: A Review of Item Difficulty Modeling Studies

Cited by: 2
Authors
Ferrara, Steve [1 ]
Steedle, Jeffrey T. [2 ]
Frantz, Roger S. [3 ]
Affiliations
[1] Cognia Learning Grp, 9115 Westside Pkwy, Alpharetta, GA 30009 USA
[2] ACT Inc, Res, Iowa City, IA USA
[3] NWEA, MetaMetr Res & Dev, Portland, OR USA
Keywords
DOCUMENT LITERACY; PERFORMANCE; VARIABLES
DOI
10.1080/08957347.2022.2103135
CLC Classification
G40 [Education]
Subject Classification Codes
040101; 120403
Abstract
Item difficulty modeling studies involve (a) hypothesizing item features, or item response demands, that are likely to predict item difficulty with some degree of accuracy; and (b) entering the features as independent variables into a regression equation or other statistical model to predict difficulty. In this review, we report findings from 13 empirical item difficulty modeling studies of reading comprehension tests. We define reading comprehension item response demands as reading passage variables (e.g., length, complexity), passage-by-item variables (e.g., degree of correspondence between item and text, type of information requested), and item stem and response option variables. We report on response demand variables that are related to item difficulty and illustrate how they can be used to manage item difficulty in construct-relevant ways so that empirical item difficulties are within a targeted range (e.g., located within the Proficient or other proficiency level range on a test's IRT scale, where intended).
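The regression approach the abstract describes can be illustrated with a minimal sketch. Everything below is hypothetical: the three response-demand features, the coefficients, and the data are synthetic stand-ins rather than values from the reviewed studies, and ordinary least squares stands in for whatever statistical model a given study used. The sketch regresses simulated IRT b parameters on hypothesized item features and reports how much difficulty variance the features explain.

```python
# Minimal sketch of an item difficulty modeling analysis: regress empirical
# item difficulties (IRT b parameters) on hypothesized response-demand
# features. All features, coefficients, and data here are synthetic
# illustrations, not results from the reviewed studies.
import numpy as np

rng = np.random.default_rng(0)
n_items = 60

# Hypothetical response-demand features for each item:
passage_length = rng.normal(500, 120, n_items)     # words in the passage
text_complexity = rng.normal(0.0, 1.0, n_items)    # standardized readability index
item_text_overlap = rng.uniform(0.0, 1.0, n_items) # stem-passage verbatim overlap

# Design matrix with an intercept column.
X = np.column_stack([np.ones(n_items), passage_length,
                     text_complexity, item_text_overlap])

# Simulated "empirical" difficulties generated from assumed coefficients plus noise.
true_betas = np.array([-2.0, 0.004, 0.5, -1.2])
b_params = X @ true_betas + rng.normal(0.0, 0.3, n_items)

# Ordinary least squares fit and variance explained (R^2).
betas, *_ = np.linalg.lstsq(X, b_params, rcond=None)
pred = X @ betas
r2 = 1.0 - np.sum((b_params - pred) ** 2) / np.sum((b_params - b_params.mean()) ** 2)
print("Estimated coefficients:", np.round(betas, 3))
print("R^2:", round(float(r2), 3))
```

In practice, the coefficient signs and magnitudes indicate which response demands push items harder or easier, which is what makes it possible to manage item difficulty in construct-relevant ways, as the abstract notes.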
Pages: 237 - 253
Number of pages: 17
Related Papers
50 items in total
  • [41] Using Differential Item Functioning to Test for Interrater Reliability in Constructed Response Items
    Walker, Cindy M.
    Sahin, Sakine Gocer
    EDUCATIONAL AND PSYCHOLOGICAL MEASUREMENT, 2020, 80 (04) : 808 - 820
  • [42] Not Read, but Nevertheless Solved? Three Experiments on PIRLS Multiple Choice Reading Comprehension Test Items
    Sparfeldt, Joern R.
    Kimmel, Rumena
    Loewenkamp, Lena
    Steingraeber, Antje
    Rost, Detlef H.
    EDUCATIONAL ASSESSMENT, 2012, 17 (04) : 214 - 232
  • [43] The cognitive equivalence of reading comprehension test items via computerized and paper-and-pencil administration
    Kobrin, J. L.
    Young, J. W.
    APPLIED MEASUREMENT IN EDUCATION, 2003, 16 (02) : 115 - 140
  • [44] A Meta-Analytic Review of Item Discrimination and Difficulty in Multiple-Choice Items Using None-of-the-Above
    Knowles, S. L.
    Welch, C. A.
    EDUCATIONAL AND PSYCHOLOGICAL MEASUREMENT, 1992, 52 (03) : 571 - 577
  • [45] Improving the Predictive Validity of Reading Comprehension Using Response Times of Correct Item Responses
    Su, Shiyang
    Davison, Mark L.
    APPLIED MEASUREMENT IN EDUCATION, 2019, 32 (02) : 166 - 182
  • [46] Item Response Theory Modeling for Examinee-selected Items with Rater Effect
    Liu, Chen-Wei
    Qiu, Xue-Lan
    Wang, Wen-Chung
    APPLIED PSYCHOLOGICAL MEASUREMENT, 2019, 43 (06) : 435 - 448
  • [47] Item Response Theory Modeling of the Philadelphia Naming Test
    Fergadiotis, Gerasimos
    Kellough, Stacey
    Hula, William D.
    JOURNAL OF SPEECH LANGUAGE AND HEARING RESEARCH, 2015, 58 (03) : 865 - 877
  • [48] Item Response Theory Modeling of the Verb Naming Test
    Fergadiotis, Gerasimos
    Casilio, Marianne
    Dickey, Michael Walsh
    Steel, Stacey
    Nicholson, Hannele
    Fleegle, Mikala
    Swiderski, Alexander
    Hula, William D.
    JOURNAL OF SPEECH LANGUAGE AND HEARING RESEARCH, 2023, 66 (05) : 1718 - 1739
  • [49] Reading comprehension tests and poor readers: How test processing demands result in different profiles
    Papadopoulos, Timothy C.
    Kendeou, Panayiota
    Shiakalli, Maria
    ANNEE PSYCHOLOGIQUE, 2014, 114 (04) : 725 - 752
  • [50] Construction of a reading literacy test item bank for fourth graders based on item response theory
    Chen, Qishan
    Zheng, Haiyan
    Fan, Honglan
    Mo, Lei
    FRONTIERS IN PSYCHOLOGY, 2023, 14