The Utilization of ChatGPT in Reshaping Future Medical Education and Learning Perspectives: A Curse or a Blessing?

Cited by: 12
Authors
Breeding, Tessa [1 ]
Martinez, Brian [1 ]
Patel, Heli [1 ]
Nasef, Hazem [1 ]
Arif, Hasan [1 ]
Nakayama, Don [2 ,3 ]
Elkbuli, Adel [4 ,5 ]
Affiliations
[1] NOVA Southeastern Univ, Kiran Patel Coll Allopath Med, Ft Lauderdale, FL USA
[2] Mercer Univ, Sch Med, Columbus, GA USA
[3] Piedmont Columbus Reg Hosp, Dept Pediat Surg, Piedmont, GA USA
[4] Orlando Reg Med Ctr Inc, Dept Surg, Div Trauma & Surg Crit Care, 86 W Underwood St, Orlando, FL 32806 USA
[5] Orlando Reg Med Ctr Inc, Dept Surg Educ, Orlando, FL 32806 USA
Keywords
ChatGPT; medical education; medical students; laypeople; common surgical conditions;
DOI
10.1177/00031348231180950
CLC Classification
R61 [Operative Surgery]
Abstract
Background: ChatGPT has substantial potential to revolutionize medical education. We aim to assess how medical students and laypeople evaluate information produced by ChatGPT compared to an evidence-based resource on the diagnosis and management of 5 common surgical conditions.
Methods: A 60-question anonymous online survey was distributed to third- and fourth-year U.S. medical students and laypeople to evaluate articles produced by ChatGPT and an evidence-based source on clarity, relevance, reliability, validity, organization, and comprehensiveness. Participants received 2 blinded articles, 1 from each source, for each surgical condition. Paired-sample t-tests were used to compare ratings between the 2 sources.
Results: Of 55 survey participants, 50.9% (n = 28) were U.S. medical students and 49.1% (n = 27) were from the general population. Medical students reported that ChatGPT articles displayed significantly more clarity (appendicitis: 4.39 vs 3.89, P = .020; diverticulitis: 4.54 vs 3.68, P < .001; small bowel obstruction (SBO): 4.43 vs 3.79, P = .003; upper GI bleed: 4.36 vs 3.93, P = .020) and better organization (diverticulitis: 4.36 vs 3.68, P = .021; SBO: 4.39 vs 3.82, P = .033) than the evidence-based source. However, for all 5 conditions, medical students found evidence-based passages to be more comprehensive than ChatGPT articles (cholecystitis: 4.04 vs 3.36, P = .009; appendicitis: 4.07 vs 3.36, P = .015; diverticulitis: 4.07 vs 3.36, P = .015; SBO: 4.11 vs 3.54, P = .030; upper GI bleed: 4.11 vs 3.29, P = .003).
Conclusion: Medical students perceived ChatGPT articles to be clearer and better organized than evidence-based sources on the pathogenesis, diagnosis, and management of 5 common surgical pathologies. However, evidence-based articles were rated as significantly more comprehensive.
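The paired-sample t-test used in the Methods compares two ratings from the same participant (one per blinded article), so the test statistic is computed on the per-rater differences. A minimal sketch, using hypothetical 1-5 Likert ratings rather than the study's actual data:

```python
import math
import statistics

# Hypothetical paired ratings from 8 raters who each scored the blinded
# ChatGPT article and the blinded evidence-based article on one condition.
# Values are illustrative only, not taken from the study.
chatgpt = [5, 4, 4, 5, 4, 5, 4, 4]
evidence = [4, 4, 3, 4, 4, 4, 3, 4]

# Paired t statistic: t = mean(d) / (sd(d) / sqrt(n)),
# where d is each rater's within-pair difference.
diffs = [c - e for c, e in zip(chatgpt, evidence)]
n = len(diffs)
mean_d = statistics.mean(diffs)
sd_d = statistics.stdev(diffs)  # sample standard deviation (n - 1 denominator)
t_stat = mean_d / (sd_d / math.sqrt(n))

# The two-sided P value would come from the t distribution with n - 1
# degrees of freedom (e.g., scipy.stats.ttest_rel computes both at once).
print(f"t({n - 1}) = {t_stat:.3f}")
```

Pairing matters here: because each participant rates both sources, the test removes between-rater variability and gains power over an unpaired comparison of the two groups of scores.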
Pages: 560-566
Page count: 7
References
24 items in total
  • [1] The impact of artificial intelligence in medicine on the future role of the physician
    Ahuja, Abhimanyu S.
    [J]. PEERJ, 2019, 7
  • [2] AlAfnan M. A., 2023, Journal of Artificial Intelligence and Technology, V3, P60, DOI 10.37965/jait.2023.0184
  • [3] The future of medical education and research: Is ChatGPT a blessing or blight in disguise?
    Arif, Taha Bin
    Munaf, Uzair
    Ul-Haque, Ibtehaj
    [J]. MEDICAL EDUCATION ONLINE, 2023, 28 (01):
  • [4] Chatting and cheating: Ensuring academic integrity in the era of ChatGPT
    Cotton, Debby R. E.
    Cotton, Peter A. A.
    Shipway, J. Reuben
    [J]. INNOVATIONS IN EDUCATION AND TEACHING INTERNATIONAL, 2024, 61 (02) : 228 - 239
  • [5] Doherty G.M., 2014, CURRENT Diagnosis & Treatment: Surgery, 14th ed.
  • [6] Eysenbach G, 2023, JMIR MED EDUC, V9, DOI 10.2196/46885
  • [7] Comparing scientific abstracts generated by ChatGPT to real abstracts with detectors and blinded human reviewers
    Gao, Catherine A.
    Howard, Frederick M.
    Markov, Nikolay S.
    Dyer, Emma C.
    Ramesh, Siddhi
    Luo, Yuan
    Pearson, Alexander T.
    [J]. NPJ DIGITAL MEDICINE, 2023, 6 (01)
  • [8] Gilson Aidan, 2023, JMIR Med Educ, V9, pe45312, DOI 10.2196/45312
  • [9] Hu K., 2023, Reuters Web site
  • [10] General surgeon involvement in the care of patients designated with an American Association for the Surgery of Trauma-endorsed ICD-10-CM emergency general surgery diagnosis code in Wisconsin
    Ingraham, Angela
    Schumacher, Jessica
    Fernandes-Taylor, Sara
    Yang, Dou-Yan
    Godat, Laura
    Smith, Alan
    Barbosa, Ronald
    Cribari, Chris
    Salim, Ali
    Schroeppel, Thomas
    Staudenmayer, Kristan
    Crandall, Marie
    Utter, Garth
    AAST Committee on Patient Assessment
    [J]. JOURNAL OF TRAUMA AND ACUTE CARE SURGERY, 2022, 92 (01) : 117 - 125