Progress is impossible without change: implementing automatic item generation in medical knowledge progress testing

Cited by: 0
Authors
Filipe Manuel Vidal Falcão
Daniela S.M. Pereira
José Miguel Pêgo
Patrício Costa
Affiliations
[1] University of Minho, Life and Health Sciences Research Institute (ICVS), School of Medicine
[2] ICVS/3B’s, PT Government Associate Laboratory
[3] iCognitus4All – IT Solutions
Source
Education and Information Technologies | 2024, Volume 29
Keywords
Item development; Automatic item generation; Progress testing; Medical education; Item response theory; Hierarchical Linear Models
Abstract
Progress tests (PT) are a popular type of longitudinal assessment used to evaluate clinical knowledge retention and lifelong learning in health professions education. Most PTs consist of multiple-choice questions (MCQs), whose development is costly and time-consuming. Automatic Item Generation (AIG) produces test items through algorithms, promising to ease this burden. However, it remains unclear how AIG items behave in formative assessment (FA) modalities such as PTs compared to manually written items. The purpose of this study was to compare the quality and validity of AIG items versus manually written items. Responses to 126 dichotomously scored, single-best-answer, five-option MCQs (23 of them automatically generated) from the 2021 University of Minho medical PT were analyzed. Procedures based on item response theory (IRT), dimensionality testing, item fit, reliability, differential item functioning (DIF), and distractor analysis were used. Qualitative assessment was conducted through expert review. Validity evidence for AIG items was assessed using hierarchical linear modeling (HLM). The PT proved to be a viable tool for assessing medical students' cognitive competencies. AIG items were parallel to manually written items, presenting similar indices of difficulty and information. The proportion of functional distractors was similar for AIG and manually written items. Evidence of validity for AIG items was found, and these items showed higher levels of quality. AIG items functioned as intended and were appropriate for evaluating medical students at various levels of the knowledge spectrum.
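The validity comparison summarized above rests on hierarchical linear modeling of student responses to AIG and manually written items. The sketch below illustrates, under stated assumptions, how such a comparison could be set up in Python with statsmodels; the response file and column names (correct, item_type, student_id) are hypothetical, and a linear mixed model is used as a simplified stand-in for the paper's HLM specification, which is not reproduced here.

# Minimal sketch (not the authors' code): comparing AIG and manually written
# items with a mixed-effects model on a long-format response table.
# The file name and column names (correct, item_type, student_id) are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

# One row per student-item response: correct in {0, 1}, item_type in {"AIG", "manual"}.
df = pd.read_csv("pt_2021_responses.csv")

# Random intercept per student; a logistic mixed model would suit the
# dichotomous outcome better, but MixedLM keeps the illustration simple.
model = smf.mixedlm("correct ~ C(item_type)", data=df, groups=df["student_id"])
result = model.fit()
print(result.summary())  # fixed effect of item_type: AIG vs. manually written items

A significant fixed effect of item_type would indicate that the two item sources differ in average difficulty, whereas a near-zero estimate is consistent with the abstract's finding that AIG items were parallel to manually written items.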
Pages: 4505–4530
Number of pages: 25