To Compare the Efficiency of ChatGPT and Bard in Medical Education: An Analysis of MCQ-Based Learning and Assessment

Cited: 0
Authors
Husain, Sharjeel [1 ,2 ]
Shahid, Sabaa [3 ,4 ]
Ansari, Zaid [1 ,2 ]
Ayoob, Tahera [4 ,5 ]
Hussain, Azhar [1 ,2 ]
Mujahid, Rimsha [1 ,2 ]
Affiliations
[1] Liaquat Coll Med & Dent, Dept Internal Med, Karachi, Pakistan
[2] Darul Sehat Hosp, Karachi, Pakistan
[3] Liaquat Coll Med & Dent, DHPE Dept Hlth Profess & Educ, Karachi, Pakistan
[4] Qamar Dent Hosp, Karachi, Pakistan
[5] Liaquat Coll Med & Dent, Dept Oral Surg, Karachi, Pakistan
Source
ANNALS ABBASI SHAHEED HOSPITAL & KARACHI MEDICAL & DENTAL COLLEGE | 2024, Vol. 29, No. 1
Keywords
Artificial intelligence; multiple-choice question; medical education
DOI: not available
CLC Number: R5 [Internal Medicine]
Discipline Codes: 1002; 100201
Abstract
Objective: This study aimed to compare the efficacy of ChatGPT and Google Bard as virtual tutors supporting students across various levels of cognition in MCQ-based assessments in Internal Medicine.

Methods: This cross-sectional study was conducted in the Department of Internal Medicine, in collaboration with the Department of Postgraduate Medical Education, from June 2023 to October 2023. A comprehensive set of multiple-choice questions (MCQs) covering various aspects of Internal Medicine was compiled by consensus of the research team. The items were systematically organized into chapters and further categorized by cognitive complexity level (C1, C2, and C3). The selected MCQs were entered into separate sessions of ChatGPT and Google Bard, and each platform's responses were compared with the corresponding answers in the designated MCQ book. Recorded responses were classified as accurate, inaccurate, or partially accurate.

Results: ChatGPT achieved an overall success rate of 64%, providing 199 correct responses out of 307 queries, of which 10 were partially correct. By contrast, Google Bard achieved an overall success rate of 58.95%, yielding 181 correct responses out of 307 queries, of which 16 were partially correct. When stratified by cognitive complexity, ChatGPT solved C2 MCQs at a rate of 80%, with rates of 69% and 54% for the C1 and C3 categories, respectively. In contrast, Google Bard showed a 33% success rate on C2 MCQs, with rates of 95% and 53% for the C1 and C3 categories, respectively.

Conclusion: The findings of this study suggest that ChatGPT is a more advantageous tool for students and medical educators than Google Bard. These advantages underscore the potential of ChatGPT to enhance the educational experience within the medical domain.
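The overall success rates reported above follow directly from the correct-response counts out of 307 queries. A minimal sketch of that arithmetic (the counts come from the abstract; the function and variable names are ours, not the study's):

```python
# Recomputing the abstract's overall success rates from the reported
# counts: 199/307 correct for ChatGPT, 181/307 for Google Bard.

def success_rate(correct: int, total: int) -> float:
    """Percentage of correct responses, rounded to two decimals."""
    return round(100 * correct / total, 2)

chatgpt_rate = success_rate(199, 307)  # ~64.82%, reported as 64%
bard_rate = success_rate(181, 307)     # ~58.96%, reported as 58.95%
print(f"ChatGPT: {chatgpt_rate}%  Bard: {bard_rate}%")
```

The small discrepancies against the reported figures (64% vs. 64.82%; 58.95% vs. 58.96%) appear to reflect rounding or truncation choices in the abstract itself.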
Pages: 56-63 (8 pages)