Performance of ChatGPT on the Chinese Postgraduate Examination for Clinical Medicine: Survey Study

Cited by: 13
Authors
Yu, Peng [1 ]
Fang, Changchang [1 ]
Liu, Xiaolin [2 ]
Fu, Wanying [1 ]
Ling, Jitao [1 ]
Yan, Zhiwei [3 ]
Jiang, Yuan [4 ]
Cao, Zhengyu [4 ]
Wu, Maoxiong [4 ]
Chen, Zhiteng [4 ]
Zhu, Wengen [5 ]
Zhang, Yuling [4 ]
Abudukeremu, Ayiguli [4 ]
Wang, Yue [4 ]
Liu, Xiao [4 ,6 ]
Wang, Jingfeng [4 ]
Affiliations
[1] Nanchang Univ, Affiliated Hosp 2, Dept Endocrine, Nanchang, Jiangxi, Peoples R China
[2] Sun Yat Sen Univ, Affiliated Hosp 8, Dept Cardiol, Shenzhen, Peoples R China
[3] Shenyang Sport Univ, Coll Kinesiol, Shenyang, Peoples R China
[4] Sun Yat Sen Univ, Sun Yat Sen Mem Hosp, Dept Cardiol, Guangzhou, Peoples R China
[5] Sun Yat Sen Univ, Affiliated Hosp 1, Dept Cardiol, Guangzhou, Peoples R China
[6] Sun Yat Sen Univ, Sun Yat Sen Mem Hosp, Dept Cardiol, 107 Yanjiang West Rd, Guangzhou, Peoples R China
Source
JMIR MEDICAL EDUCATION | 2024, Vol. 10
Keywords
ChatGPT; Chinese Postgraduate Examination for Clinical Medicine; medical student; performance; artificial intelligence; medical care; qualitative feedback; medical education; clinical decision-making
DOI
10.2196/48514
Chinese Library Classification
G40 [Education]
Discipline codes
040101; 120403
Abstract
Background: ChatGPT, an artificial intelligence (AI) system built on large-scale language models, has sparked interest in health care. Nonetheless, AI's text comprehension and generation capabilities are constrained by the quality and volume of training data available for a given language, so its performance across different languages requires further investigation. While AI holds substantial potential in medicine, challenges must be addressed, including the formulation of clinical care standards; the facilitation of cultural transitions in medical education and practice; and the management of ethical issues such as data privacy, consent, and bias.

Objective: This study aimed to evaluate ChatGPT's performance on Chinese Postgraduate Examination for Clinical Medicine questions, assess its clinical reasoning ability, investigate potential limitations in the Chinese language, and explore its potential as a tool for medical professionals in the Chinese context.

Methods: A data set of 165 Chinese Postgraduate Examination for Clinical Medicine questions was used to assess the medical knowledge of ChatGPT (version 3.5) in the Chinese language. The questions were divided into three categories: (1) common questions (n=90) assessing basic medical knowledge, (2) case analysis questions (n=45) focusing on clinical decision-making through patient case evaluations, and (3) multichoice questions (n=30) requiring the selection of multiple correct answers. First, we assessed whether ChatGPT could meet the stringent cutoff score defined by the government agency, which requires performance within the top 20% of candidates. Additionally, in evaluating ChatGPT's performance on both original and encoded medical questions, we used 3 primary indicators: accuracy, concordance (whether the explanation validates the chosen answer), and the frequency of insights.

Results: ChatGPT scored 153.5 out of 300 on the original questions in Chinese, surpassing the minimum passing score, which is set so that the number of candidates who pass is about 20% greater than the enrollment quota. However, ChatGPT showed low accuracy in answering open-ended medical questions, with an overall accuracy of only 31.5%. The accuracy for common questions, multichoice questions, and case analysis questions was 42%, 37%, and 17%, respectively. ChatGPT achieved 90% concordance across all questions; among correct responses, concordance was 100%, significantly exceeding that of incorrect responses (n=57, 50%; P<.001). ChatGPT provided innovative insights for 80% (n=132) of all questions, with an average of 2.95 insights per accurate response.

Conclusions: Although ChatGPT surpassed the passing threshold for the Chinese Postgraduate Examination for Clinical Medicine, its performance in answering open-ended medical questions was suboptimal. Nonetheless, ChatGPT exhibited high internal concordance and the ability to generate multiple insights in the Chinese language. Future research should investigate language-based discrepancies in ChatGPT's performance within the health care context.
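For readers who want a concrete picture of how the abstract's 3 primary indicators (accuracy, concordance, and frequency of insights) could be tabulated, the following is a minimal Python sketch. It is not the authors' code: the GradedResponse record, its fields, and the summarize helper are illustrative assumptions about how graded responses might be stored, and the sample values are made up.

# Illustrative sketch (not the study's code): tabulating accuracy,
# concordance, and mean insights per accurate response over a list of
# graded ChatGPT answers. All names and values here are hypothetical.
from dataclasses import dataclass

@dataclass
class GradedResponse:
    correct: bool     # answer matched the official key
    concordant: bool  # explanation agreed with the chosen answer
    insights: int     # count of novel, nontrivial points in the response

def summarize(responses: list[GradedResponse]) -> dict[str, float]:
    n = len(responses)
    correct = [r for r in responses if r.correct]
    return {
        "accuracy": len(correct) / n,
        "concordance": sum(r.concordant for r in responses) / n,
        "insights_per_accurate": (
            sum(r.insights for r in correct) / len(correct) if correct else 0.0
        ),
    }

# Toy usage with made-up gradings, not the study's data:
sample = [
    GradedResponse(correct=True, concordant=True, insights=3),
    GradedResponse(correct=False, concordant=True, insights=1),
    GradedResponse(correct=False, concordant=False, insights=0),
]
print(summarize(sample))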
Pages: 9