Performance of ChatGPT and Bard in self-assessment questions for nephrology board renewal

Cited by: 10
|
Authors
Noda, Ryunosuke [1 ]
Izaki, Yuto [1 ]
Kitano, Fumiya [1 ]
Komatsu, Jun [1 ]
Ichikawa, Daisuke [1 ]
Shibagaki, Yugo [1 ]
Institution
[1] St Marianna Univ, Dept Internal Med, Div Nephrol & Hypertens, Sch Med, 2-16-1 Sugao,Miyamae Ku, Kawasaki, Kanagawa 2168511, Japan
Keywords
ChatGPT; GPT-4; Large language models; Artificial intelligence; Nephrology
DOI
10.1007/s10157-023-02451-w
CLC Classification Number
R5 [Internal Medicine]; R69 [Urology (genitourinary diseases)]
Discipline Classification Code
1002; 100201
Abstract
Background Large language models (LLMs) have driven recent advances in artificial intelligence. While LLMs have demonstrated high performance on general medical examinations, their performance in specialized areas such as nephrology remains unclear. This study aimed to evaluate ChatGPT and Bard for potential nephrology applications. Methods Ninety-nine questions from the Self-Assessment Questions for Nephrology Board Renewal from 2018 to 2022 were presented to two versions of ChatGPT (GPT-3.5 and GPT-4) and to Bard. We calculated the correct answer rates across the five years, for each year, and by question category, and checked whether they exceeded the pass criterion. The correct answer rates were also compared with those of nephrology residents. Results The overall correct answer rates for GPT-3.5, GPT-4, and Bard were 31.3% (31/99), 54.5% (54/99), and 32.3% (32/99), respectively; GPT-4 thus significantly outperformed both GPT-3.5 (p < 0.01) and Bard (p < 0.01). GPT-4 passed in three of the five years, barely meeting the minimum threshold in two of them. GPT-4 performed significantly better than GPT-3.5 and Bard on problem-solving, clinical, and non-image questions. GPT-4's performance fell between that of third- and fourth-year nephrology residents. Conclusions GPT-4 outperformed GPT-3.5 and Bard and met the Nephrology Board renewal standards in specific years, albeit marginally. These results highlight both the potential and the limitations of LLMs in nephrology. As LLMs advance, nephrologists should understand their performance characteristics for future applications.
Pages: 465 - 469
Page count: 5
Related Papers
34 items in total
  • [31] The performance of ChatGPT versus neurosurgery residents in neurosurgical board examination-like questions: a systematic review and meta-analysis
    Bongco, Edgar Dominic A.
    Cua, Sean Kendrich N.
    Hernandez, Mary Angeline Luz U.
    Pascual, Juan Silvestre G.
    Khu, Kathleen Joy O.
    NEUROSURGICAL REVIEW, 2024, 47 (01)
  • [32] Evaluating ChatGPT as a self-learning tool in medical biochemistry: A performance assessment in undergraduate medical university examination
    Surapaneni, Krishna Mohan
    Rajajagadeesan, Anusha
    Goudhaman, Lakshmi
    Lakshmanan, Shalini
    Sundaramoorthi, Saranya
    Ravi, Dineshkumar
    Rajendiran, Kalaiselvi
    Swaminathan, Porchelvan
    BIOCHEMISTRY AND MOLECULAR BIOLOGY EDUCATION, 2024, 52 (02) : 237 - 248
  • [33] Performance of Chat Generative Pre-trained Transformer-4o in the Adult Clinical Cardiology Self-Assessment Program
    Malik, Abdulaziz
    Madias, Christopher
    Wessler, Benjamin S.
    EUROPEAN HEART JOURNAL - DIGITAL HEALTH, 2024, 6 (01) : 155 - 158
  • [34] Comparative performance of humans versus GPT-4.0 and GPT-3.5 in the self-assessment program of American Academy of Ophthalmology
    Taloni, Andrea
    Borselli, Massimiliano
    Scarsi, Valentina
    Rossi, Costanza
    Coco, Giulia
    Scorcia, Vincenzo
    Giannaccare, Giuseppe
    SCIENTIFIC REPORTS, 2023, 13 (01)