Performance Comparison of ChatGPT-4 and Japanese Medical Residents in the General Medicine In-Training Examination: Comparison Study

Cited by: 13
Authors
Watari, Takashi [1 ,2 ,3 ]
Takagi, Soshi [4 ]
Sakaguchi, Kota [1 ]
Nishizaki, Yuji [5 ]
Shimizu, Taro [6 ]
Yamamoto, Yu [7 ]
Tokuda, Yasuharu [8 ]
Affiliations
[1] Shimane Univ Hosp, Gen Med Ctr, Izumo, Japan
[2] Univ Michigan, Dept Med, Med Sch, 2215 Fuller Rd, Ann Arbor, MI 48105 USA
[3] VA Ann Arbor Healthcare Syst, Med Serv, Ann Arbor, MI USA
[4] Shimane Univ, Fac Med, Izumo, Japan
[5] Juntendo Univ, Sch Med, Div Med Educ, Tokyo, Japan
[6] Dokkyo Med Univ Hosp, Dept Diagnost & Generalist Med, Tochigi, Japan
[7] Jichi Med Univ, Ctr Community Med, Div Gen Med, Tochigi, Japan
[8] Muribushi Okinawa Project Teaching Hosp, Okinawa, Japan
Source
JMIR MEDICAL EDUCATION | 2023, Vol. 9
Keywords
ChatGPT; artificial intelligence; medical education; clinical training; non-English language; ChatGPT-4; Japan; Japanese; Asia; Asian; exam; examination; exams; examinations; NLP; natural language processing; LLM; language model; language models; performance; response; responses; answer; answers; chatbot; chatbots; conversational agent; conversational agents; reasoning; clinical; GM-ITE; self-assessment; residency programs;
DOI
10.2196/52202
CLC number
G40 [Education]
Subject classification codes
040101; 120403
Abstract
Background: The reliability of GPT-4, a state-of-the-art large language model with strong clinical reasoning and medical knowledge, remains largely unverified in non-English languages.
Objective: This study aimed to compare the fundamental clinical competencies of Japanese residents and GPT-4 using the General Medicine In-Training Examination (GM-ITE).
Methods: We used OpenAI's GPT-4 model and GM-ITE questions from the 2020, 2021, and 2022 examinations to compare GPT-4's performance with that of residents completing their second year of residency. Given GPT-4's current capabilities, only single-choice questions were included; questions involving audio, video, or image data were excluded. The assessment covered 4 categories: general theory (professionalism and medical interviewing), symptomatology and clinical reasoning, physical examinations and clinical procedures, and specific diseases. Questions were additionally classified into 7 specialty fields and 3 difficulty levels, the latter determined from residents' correct-response rates.
Results: Across 137 GM-ITE questions in Japanese, GPT-4 scored significantly higher than the residents' mean (residents: 55.8%; GPT-4: 70.1%; P<.001). By discipline, GPT-4 scored 23.5 points higher in "specific diseases," 30.9 points higher in "obstetrics and gynecology," and 26.1 points higher in "internal medicine." GPT-4 scored lower than residents in "medical interviewing and professionalism," "general practice," and "psychiatry," although these differences were not statistically significant. By question difficulty, GPT-4 scored 17.2 points lower on easy questions (P=.007) but 25.4 and 24.4 points higher on normal and difficult questions, respectively (P<.001). By examination year, GPT-4 scored 21.7 and 21.5 points higher on the 2020 (P=.01) and 2022 (P=.003) examinations, respectively, but only 3.5 points higher on the 2021 examination (not significant).
Conclusions: GPT-4 outperformed the average medical resident on the Japanese-language GM-ITE, an examination originally designed for residents. GPT-4 tended to score higher on difficult questions with low resident correct-response rates and on questions demanding a broader understanding of diseases, but lower on questions residents answered readily, such as those testing attitudes toward patients and professionalism or requiring an understanding of context and communication. These findings highlight both the strengths and the limitations of artificial intelligence applications in medical education and practice.
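Illustrative note: the abstract reports that GPT-4's 70.1% score differed significantly from the residents' 55.8% mean (P<.001) but does not state which statistical test was used. The Python sketch below is only a minimal, assumed reconstruction, framing the comparison as a one-sample binomial test of GPT-4's correct-answer count on the 137 questions against the residents' mean rate; the test choice and the rounded correct-answer count are assumptions for illustration, not the authors' published method.

    # Minimal sketch (assumed, not the authors' published analysis): treat GPT-4's
    # answers on the 137 single-choice GM-ITE questions as Bernoulli trials and ask
    # whether its correct-answer rate differs from the residents' mean rate.
    from scipy.stats import binomtest

    N_QUESTIONS = 137                  # single-choice questions, 2020-2022 (per the abstract)
    RESIDENT_MEAN_RATE = 0.558         # residents' mean correct-answer rate (55.8%)
    GPT4_CORRECT = round(0.701 * N_QUESTIONS)  # ~96 correct answers (70.1%), rounded assumption

    result = binomtest(GPT4_CORRECT, N_QUESTIONS, RESIDENT_MEAN_RATE, alternative="two-sided")
    print(f"GPT-4: {GPT4_CORRECT}/{N_QUESTIONS} correct ({GPT4_CORRECT / N_QUESTIONS:.1%})")
    print(f"Two-sided binomial test vs {RESIDENT_MEAN_RATE:.1%}: p = {result.pvalue:.4f}")

Under these assumptions the sketch reproduces only the direction of the reported finding; the paper's actual analysis (per-question resident response rates, subgroup comparisons by category, difficulty, and year) would require the underlying item-level data.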
Pages: 8