Comparing the Performance of ChatGPT-4 and Medical Students on MCQs at Varied Levels of Bloom's Taxonomy

Cited by: 2
Authors
Bharatha, Ambadasu [1 ,4 ]
Ojeh, Nkemcho [1 ]
Rabbi, Ahbab Mohammad Fazle [2 ]
Campbell, Michael H. [1 ]
Krishnamurthy, Kandamaran [1 ]
Layne-Yarde, Rhaheem N. A. [1 ]
Kumar, Alok [1 ]
Springer, Dale C. R. [1 ]
Connell, Kenneth L. [1 ]
Majumder, Md Anwarul Azim [1 ,3 ]
Affiliations
[1] Univ West Indies, Fac Med Sci, Bridgetown, Barbados
[2] Univ Dhaka, Dept Populat Sci, Dhaka, Bangladesh
[3] Univ West Indies, Fac Med Sci, Med Educ, Cave Hill Campus, Bridgetown, Barbados
[4] Univ West Indies, Fac Med Sci, Pharmacol, Cave Hill Campus, Bridgetown, Barbados
Source
ADVANCES IN MEDICAL EDUCATION AND PRACTICE | 2024, Vol. 15
Keywords
artificial intelligence; ChatGPT-4; medical students; knowledge; interpretation abilities; multiple choice questions; education
DOI
10.2147/AMEP.S457408
CLC classification
G40 [Education]
Subject classification
040101; 120403
Abstract
Introduction: This research investigated the capabilities of ChatGPT-4 compared to medical students in answering MCQs, using the revised Bloom's Taxonomy as a benchmark.
Methods: A cross-sectional study was conducted at The University of the West Indies, Barbados. ChatGPT-4 and medical students were assessed on MCQs from various medical courses using computer-based testing.
Results: The study included 304 MCQs. Students demonstrated good knowledge, with 78% correctly answering at least 90% of the questions. However, ChatGPT-4 achieved a higher overall score (73.7%) than students (66.7%). Course type significantly affected ChatGPT-4's performance, but revised Bloom's Taxonomy levels did not. A detailed association check between program levels and Bloom's taxonomy levels for ChatGPT-4's correct answers showed a highly significant association (p<0.001), reflecting a concentration of "remember-level" questions in preclinical courses and "evaluate-level" questions in clinical courses.
Discussion: The study highlights ChatGPT-4's proficiency on standardized tests but indicates limitations in clinical reasoning and practical skills. This performance discrepancy suggests that the effectiveness of artificial intelligence (AI) varies with course content.
Conclusion: While ChatGPT-4 shows promise as an educational tool, its role should be supplementary, with strategic integration into medical education to leverage its strengths and address its limitations. Further research is needed to explore AI's impact on medical education and student performance across educational levels and courses.
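The abstract does not name the statistical test behind the p<0.001 result; for an association between program level (preclinical vs clinical) and the Bloom's taxonomy level of correctly answered items, a chi-square test of independence on a contingency table is the usual choice. A minimal sketch of that kind of analysis, using hypothetical counts (the real distribution of the 304 MCQs is not reported in the abstract):

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical counts of MCQs that ChatGPT-4 answered correctly,
# cross-tabulated by program level (rows) and revised Bloom's
# taxonomy level (columns); the abstract does not give the raw table.
bloom_levels = ["remember", "understand", "apply", "analyze", "evaluate"]
observed = np.array([
    [42, 30, 18, 10,  4],   # preclinical: skewed toward remember-level items
    [ 8, 14, 20, 26, 52],   # clinical: skewed toward evaluate-level items
])

# Chi-square test of independence between the two categorical variables.
chi2, p, dof, expected = chi2_contingency(observed)
print("columns:", bloom_levels)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.2e}")
# p < 0.001 here would match the reported highly significant association
# between program level and Bloom's taxonomy level.
```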
Pages: 393-400
Number of pages: 8