ChatGPT Performs on the Chinese National Medical Licensing Examination

Times Cited: 45
Authors
Wang, Xinyi [1 ]
Gong, Zhenye [1 ]
Wang, Guoxin [1 ]
Jia, Jingdan [1 ]
Xu, Ying [1 ]
Zhao, Jialu [1 ]
Fan, Qingye [1 ]
Wu, Shaun [2 ]
Hu, Weiguo [1 ]
Li, Xiaoyang [1 ]
Affiliations
[1] Shanghai Jiao Tong Univ, Ruijin Hosp, Sch Med, Dept Med Educ, 197 Ruijin Rd 2, Shanghai 200025, Peoples R China
[2] WORK Med Technol Grp LTD, Hangzhou, Peoples R China
Keywords
ChatGPT; Chinese National Medical Licensing Examination; Medical student
DOI
10.1007/s10916-023-01961-0
Chinese Library Classification
R19 [Health organization and services (health service management)];
Subject Classification Code
Abstract
ChatGPT, a language model developed by OpenAI, uses a 175-billion-parameter Transformer architecture for natural language processing tasks. This study compared the knowledge and interpretation ability of ChatGPT with those of medical students in China by administering the Chinese National Medical Licensing Examination (NMLE) to both. We evaluated ChatGPT's performance on three years of the NMLE, each consisting of four units, and compared its results with those of medical students who had completed five years of study at medical colleges. ChatGPT performed below the level of the medical students, and its correct-answer rate varied with the year in which the exam questions were released. ChatGPT's knowledge and interpretation ability on the NMLE are not yet comparable to those of medical students in China, although these abilities will probably improve with further deep learning.
Pages: 5