ChatGPT versus human in generating medical graduate exam multiple choice questions - A multinational prospective study (Hong Kong SAR, Singapore, Ireland, and the United Kingdom)

Cited by: 46
Authors
Cheung, Billy Ho Hung [1 ]
Lau, Gary Kui Kai [1 ]
Wong, Gordon Tin Chun [1 ]
Lee, Elaine Yuen Phin [1 ]
Kulkarni, Dhananjay [2 ]
Seow, Choon Sheong [3 ]
Wong, Ruby [4 ]
Co, Michael Tiong-Hong [1 ]
Affiliations
[1] Univ Hong Kong, LKS Fac Med, Hong Kong, Peoples R China
[2] Univ Edinburgh, Dept Surg, Edinburgh, Scotland
[3] Natl Univ Canc Inst Singapore, Dept Surg, Singapore, Singapore
[4] Univ Galway, Dept Surg, Galway, Ireland
Source
PLOS ONE | 2023, Vol. 18, Issue 8
Funding
UK Research and Innovation (UKRI);
Keywords
EDUCATION; QUALITY; DISTRACTORS; TESTS;
DOI
10.1371/journal.pone.0290691
Chinese Library Classification (CLC) Codes
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences];
Subject Classification Codes
07; 0710; 09;
Abstract
Introduction: Large language models, in particular ChatGPT, have showcased remarkable language-processing capabilities. Given the substantial workload of university medical staff, this study aims to assess the quality of multiple-choice questions (MCQs) produced by ChatGPT for use in graduate medical examinations, compared with questions written by university professoriate staff based on standard medical textbooks.
Methods: 50 MCQs were generated by ChatGPT with reference to two standard undergraduate medical textbooks (Harrison's, and Bailey & Love's). Another 50 MCQs were drafted by two university professoriate staff using the same textbooks. All 100 MCQs were individually numbered, randomized, and sent to five independent international assessors for quality assessment using a standardized assessment score across five domains: appropriateness of the question, clarity and specificity, relevance, discriminative power of alternatives, and suitability for a medical graduate examination.
Results: ChatGPT took a total of 20 minutes 25 seconds to create the 50 questions, while the two human examiners took a total of 211 minutes 33 seconds to draft theirs. Comparing mean scores between the A.I.-constructed and the human-drafted questions, the A.I. was inferior to humans only in the relevance domain (A.I.: 7.56 +/- 0.94 vs. human: 7.88 +/- 0.52; p = 0.04). There was no significant difference in question quality between A.I.- and human-drafted questions in the total assessment score or in the other domains. Questions generated by the A.I. yielded a wider range of scores, while those created by humans were consistent and fell within a narrower range.
Conclusion: ChatGPT has the potential to generate MCQs of comparable quality for medical graduate examinations within a significantly shorter time.
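The abstract reports per-domain mean comparisons (e.g., relevance: 7.56 +/- 0.94 vs. 7.88 +/- 0.52, p = 0.04) and a wider score spread for A.I.-generated questions. As a minimal illustrative sketch only - the paper's actual statistical method is not stated in this record, and the score arrays below are hypothetical, simulated from the reported means and SDs - such a comparison could be run in Python as follows:

```python
# Minimal sketch (not from the paper) of the comparison the abstract describes:
# per-domain mean scores compared between A.I.- and human-written MCQs, plus a
# variance check for the "wider range of scores" claim.
# The score arrays are hypothetical placeholders, not study data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
ai_relevance = rng.normal(7.56, 0.94, 50)     # hypothetical: mean/SD taken from abstract
human_relevance = rng.normal(7.88, 0.52, 50)  # hypothetical: mean/SD taken from abstract

# Welch's two-sample t-test on the relevance domain (unequal variances assumed,
# since the reported SDs differ markedly).
t_stat, p_value = stats.ttest_ind(ai_relevance, human_relevance, equal_var=False)
print(f"relevance: t = {t_stat:.2f}, p = {p_value:.3f}")

# Levene's test probes the claim that A.I. scores spread over a wider range.
w_stat, p_var = stats.levene(ai_relevance, human_relevance)
print(f"variance difference: W = {w_stat:.2f}, p = {p_var:.3f}")
```

Welch's t-test is used here only because the reported SDs differ; for bounded rating scores, a nonparametric alternative such as the Mann-Whitney U test would be an equally common choice.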
Pages: 12