AI versus human-generated multiple-choice questions for medical education: a cohort study in a high-stakes examination

Cited by: 0
Authors
Law, Alex K. K. [1 ,3 ]
So, Jerome [2 ]
Lui, Chun Tat [3 ]
Choi, Yu Fai [3 ]
Cheung, Koon Ho [3 ]
Hung, Kevin Kei-ching [1 ]
Graham, Colin Alexander [1 ,3 ]
Affiliations
[1] Accident & Emergency Medicine Academic Unit (AEMAU), The Chinese University of Hong Kong (CUHK), 2/F, Main Clinical Block & Trauma Centre, Prince of Wales Hospital, Shatin, Hong Kong, China
[2] Department of Accident & Emergency, Tseung Kwan O Hospital, Hong Kong, China
[3] Hong Kong College of Emergency Medicine, Hong Kong, China
Keywords
Artificial intelligence; Educational measurement; Multiple choice questions; Medical education; Cognitive processes;
DOI
10.1186/s12909-025-06796-6
Chinese Library Classification
G40 [Education]
Subject Classification Codes
040101; 120403
Abstract
Background: The creation of high-quality multiple-choice questions (MCQs) is essential for medical education assessments but is resource-intensive and time-consuming when done by human experts. Large language models (LLMs) such as ChatGPT-4o offer a promising alternative, but their efficacy remains unclear, particularly in high-stakes examinations.

Objective: This study aimed to evaluate the quality and psychometric properties of ChatGPT-4o-generated MCQs compared with human-written MCQs in a high-stakes medical licensing examination.

Methods: A prospective cohort study was conducted among medical doctors preparing for the Primary Examination on Emergency Medicine (PEEM) organised by the Hong Kong College of Emergency Medicine in August 2024. Participants attempted two sets of 100 MCQs: one AI-generated and one human-generated. Expert reviewers assessed the MCQs for factual correctness, relevance, difficulty, alignment with Bloom's taxonomy (remember, understand, apply, analyse), and item-writing flaws. Psychometric analyses included difficulty and discrimination indices and KR-20 reliability. Candidate performance and time efficiency were also evaluated.

Results: Among 24 participants, AI-generated MCQs were easier than human-generated MCQs (mean difficulty index 0.78 ± 0.22 vs. 0.69 ± 0.23, p < 0.01) but showed similar discrimination indices (mean 0.22 ± 0.23 vs. 0.26 ± 0.26). Agreement between the two question sets was moderate (ICC = 0.62, 95% CI 0.12-0.84, p = 0.01). Expert review identified more factual inaccuracies (6% vs. 4%), irrelevant content (6% vs. 0%), and inappropriate difficulty levels (14% vs. 1%) in the AI-generated MCQs. AI questions primarily tested lower-order cognitive skills, whereas human MCQs better assessed higher-order skills (χ² = 14.27, p = 0.003). AI substantially reduced the time spent on question generation (24.5 vs. 96 person-hours).

Conclusion: ChatGPT-4o shows potential for efficiently generating MCQs but lacks the depth needed for complex assessments. Human review remains essential to ensure quality. Combining AI efficiency with expert oversight could optimise question creation for high-stakes examinations, offering a scalable model for medical education that balances time efficiency with content quality.
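For readers unfamiliar with the classical test theory statistics named in the abstract (item difficulty index, upper-lower discrimination index, and KR-20 reliability), the short Python sketch below shows how these indices are conventionally computed from a candidates-by-items matrix of dichotomous scores. It is an illustration only: the simulated data and function names are hypothetical assumptions and do not reproduce the authors' actual analysis.

import numpy as np

def item_difficulty(responses):
    """Difficulty index: proportion of candidates answering each item correctly.
    `responses` is a 2-D 0/1 array (candidates x items)."""
    return responses.mean(axis=0)

def item_discrimination(responses, group_fraction=0.27):
    """Upper-lower discrimination index: difference in item difficulty between
    the top- and bottom-scoring groups (classically the top/bottom 27%)."""
    totals = responses.sum(axis=1)
    order = np.argsort(totals)
    n_group = max(1, int(round(group_fraction * responses.shape[0])))
    lower = responses[order[:n_group]]
    upper = responses[order[-n_group:]]
    return upper.mean(axis=0) - lower.mean(axis=0)

def kr20(responses):
    """Kuder-Richardson formula 20: internal-consistency reliability for
    dichotomously scored items, KR-20 = k/(k-1) * (1 - sum(p*q) / var(total))."""
    k = responses.shape[1]
    p = responses.mean(axis=0)                      # proportion correct per item
    q = 1.0 - p
    total_var = responses.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1.0 - (p * q).sum() / total_var)

# Hypothetical example: 24 candidates x 100 items, simulated with a
# per-candidate ability level (not the study's real response data).
rng = np.random.default_rng(0)
ability = rng.uniform(0.55, 0.9, size=(24, 1))
scores = (rng.random((24, 100)) < ability).astype(int)
print(round(float(item_difficulty(scores).mean()), 2),
      round(float(item_discrimination(scores).mean()), 2),
      round(float(kr20(scores)), 2))

In this framing, the abstract's result that AI items had a higher mean difficulty index (0.78 vs. 0.69) means a larger proportion of candidates answered them correctly, i.e. the AI items were easier.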
Pages: 9