Evaluation of the Item Analysis of Multiple-Choice Pediatric Exams: A College of Medicine Departmental Review

Cited by: 0
Authors:
Mustafa, Alam Eldin M. [1 ]
Affiliations:
[1] King Khalid Univ, Dept Child Hlth, Abha, Saudi Arabia
Keywords:
Item analysis; Multiple-choice questions; Difficulty index; Discrimination index; Distractors; Non-functional distractor; Reliability
DOI: not available
CLC number: R5 [Internal Medicine]
Discipline codes: 1002; 100201
Abstract
Objectives: To evaluate the detailed item-analysis indices of the written multiple-choice question (MCQ) exams held in the Department of Child Health, Faculty of Medicine, over the last four academic years (1439-1443 AH), and to outline a plan for improving the upcoming written MCQ exams.

Methods: This was a retrospective cross-sectional study of the item analysis of the midterm MCQ exams of the MBBS Pediatrics-2 course for both the boys' and girls' groups in the Child Health Department, Faculty of Medicine, King Khalid University, Saudi Arabia, covering the years 1439, 1440, 1441, and 1442, together with the first semester of the girls' group in 1443. These 16 exams contained a total of 643 items. The data obtained comprised the difficulty index, discrimination index, point-biserial reliability, and distractor analysis of each exam item. The data were tabulated and statistical significance was determined for selected variables in the analysis.

Results: A total of 1002 students were enrolled in the study, and 643 items were analyzed. Students' grades were distributed as follows: A, 73 students (7.3%); B, 219 (21.8%); C, 331 (33%); D, 214 (21.4%); and F, 165 (16.5%). Difficulty index: taking an index of 80% or more as an easy item, 30% or less as a difficult item, and between 30% and 80% as moderately difficult gives three categories: difficult items numbered 43 (6.6% of the total), moderately difficult items 343 (53.4%), and easy items 257 (40%). A statistically significant association (p ≤ 0.05) was found when these difficulty levels were compared across the exam years.

Conclusions: The departmental exam committee needs to work comprehensively to shift exam difficulty toward the moderate range; the quality of the questions also needs extensive work on refining the distractors and on revising the correctness and suitability of a considerable number of items.
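For reference, the three item-level statistics named in the abstract are typically computed as follows. The sketch below is a minimal, assumed implementation using standard psychometric definitions and the difficulty cut-offs quoted above (30% or less difficult, 80% or more easy); the function name, the NumPy-based layout, and the 27% upper/lower split for the discrimination index are illustrative assumptions, not the authors' actual analysis code.

```python
# Minimal sketch (assumed, illustrative) of standard item-analysis indices:
# difficulty index, discrimination index, and point-biserial correlation
# for one item of a scored 0/1 response matrix.
import numpy as np

def item_analysis(responses: np.ndarray, item: int) -> dict:
    """responses: 0/1 array, shape (n_students, n_items); item: column index."""
    scores = responses[:, item]        # 1 = correct, 0 = incorrect
    totals = responses.sum(axis=1)     # each student's total exam score

    # Difficulty index P: proportion of students answering the item correctly.
    p = scores.mean()

    # Discrimination index D: P(upper group) - P(lower group), using the
    # conventional top/bottom 27% of students ranked by total score
    # (the 27% split is an assumption; the paper does not state its split).
    order = np.argsort(totals)
    k = max(1, int(round(0.27 * len(totals))))
    d = scores[order[-k:]].mean() - scores[order[:k]].mean()

    # Point-biserial r_pb: correlation of the 0/1 item score with totals.
    r_pb = np.corrcoef(scores, totals)[0, 1]

    # Classification using the cut-offs stated in the abstract.
    label = "difficult" if p <= 0.30 else "easy" if p >= 0.80 else "moderate"
    return {"difficulty": p, "discrimination": d,
            "point_biserial": r_pb, "category": label}
```

For a hypothetical 0/1 score matrix, `item_analysis(matrix, 0)` returns the four values for the first item; an item with difficulty near 0.55 and discrimination above 0.3 would fall in the moderate band the authors recommend targeting.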
Pages: 2363-2369 (7 pages)
Related articles (50 total)
  • [41] Item Analysis of Single Best Response Type Multiple Choice Questions for Formative Assessment in Obstetrics and Gynaecology
    Kulshreshtha, Shabdika
    Gupta, Ganesh
    Goyal, Gourav
    Gupta, Kalika
    Davda, Kush
    JOURNAL OF OBSTETRICS AND GYNECOLOGY OF INDIA, 2024, 74 (03) : 256 - 264
  • [42] Item analysis and evaluation in the examinations in the faculty of medicine at Ondokuz Mayis University
    Tomak, L.
    Bek, Y.
    NIGERIAN JOURNAL OF CLINICAL PRACTICE, 2015, 18 (03) : 387 - 394
  • [43] Item Analysis of a multiple-choice reading test in the Italian certification for foreign speakers CILS (level B1; summer session 2012)
    Torresan, Paolo
    CALIGRAMA-REVISTA DE ESTUDOS ROMANICOS, 2014, 19 (02): : 17 - 33
  • [44] An investigation of enhancement of ability evaluation by using a nested logit model for multiple-choice items
    Tour, Liu
    Wang Mengcheng
    Xin Tao
    ANALES DE PSICOLOGIA, 2017, 33 (03): : 530 - 537
  • [45] Does Developing Multiple-Choice Questions Improve Medical Students' Learning? A Systematic Review
    Touissi, Youness
    Hjiej, Ghita
    Hajjioui, Abderrazak
    Ibrahimi, Azeddine
    Fourtassi, Maryam
    MEDICAL EDUCATION ONLINE, 2022, 27 (01)
  • [46] ChatGPT for generating multiple-choice questions: Evidence on the use of artificial intelligence in automatic item generation for a rational pharmacotherapy exam
    Kiyak, Yavuz Selim
    Coskun, Ozlem
    Budakoglu, Isil Irem
    Uluoglu, Canan
    EUROPEAN JOURNAL OF CLINICAL PHARMACOLOGY, 2024, 80 (05) : 729 - 735
  • [47] Using Learning Analytics to evaluate the quality of multiple-choice questions: A perspective with Classical Test Theory and Item Response Theory
    Azevedo, Jose Manuel
    Oliveira, Ema P.
    Beites, Patricia Damas
    INTERNATIONAL JOURNAL OF INFORMATION AND LEARNING TECHNOLOGY, 2019, 36 (04) : 322 - 341
  • [49] ASSESSING INTER-RATER AGREEMENT ABOUT ITEM-WRITING FLAWS IN MULTIPLE-CHOICE QUESTIONS OF CLINICAL ANATOMY
    Guimaraes, B.
    Pais, J.
    Coelho, E.
    Silva, A.
    Povo, A.
    Lourinho, I.
    Severo, M.
    Ferreira, M. A.
    EDULEARN13: 5TH INTERNATIONAL CONFERENCE ON EDUCATION AND NEW LEARNING TECHNOLOGIES, 2013, : 5921 - 5924
  • [50] Will a Short Training Session Improve Multiple-Choice Item-Writing Quality by Dental School Faculty? A Pilot Study
    Dellinges, Mark A.
    Curtis, Donald A.
    JOURNAL OF DENTAL EDUCATION, 2017, 81 (08) : 948 - 955