Evaluating AI in medicine: a comparative analysis of expert and ChatGPT responses to colorectal cancer questions

Cited by: 23
Authors
Peng, Wen [1 ,2 ]
Feng, Yifei [1 ,2 ]
Yao, Cui [1 ,2 ]
Zhang, Sheng [3 ]
Zhuo, Han [4 ]
Qiu, Tianzhu [5 ]
Zhang, Yi [1 ,2 ]
Tang, Junwei [1 ,2 ]
Gu, Yanhong [5 ]
Sun, Yueming [1 ,2 ]
Affiliations
[1] Nanjing Med Univ, Affiliated Hosp 1, Dept Gen Surg, Nanjing 210029, Jiangsu, Peoples R China
[2] Nanjing Med Univ, Sch Clin Med 1, Nanjing, Peoples R China
[3] Nanjing Med Univ, Affiliated Hosp 1, Dept Radiotherapy, Nanjing, Peoples R China
[4] Nanjing Med Univ, Affiliated Hosp 1, Dept Intervent, Nanjing, Peoples R China
[5] Nanjing Med Univ, Affiliated Hosp 1, Dept Oncol, Nanjing, Peoples R China
DOI
10.1038/s41598-024-52853-3
Chinese Library Classification (CLC)
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences];
Discipline Codes
07; 0710; 09;
Abstract
Colorectal cancer (CRC) is a global health challenge, and patient education plays a crucial role in its early detection and treatment. Despite progress in AI technology, exemplified by transformer-based models such as ChatGPT, our understanding of their efficacy for medical purposes remains limited. We aimed to assess the proficiency of ChatGPT in the field of popular science, specifically in answering questions related to CRC diagnosis and treatment, using the book "Colorectal Cancer: Your Questions Answered" as a reference. In total, 131 valid questions from the book were manually input into ChatGPT. Responses were evaluated by clinical physicians in the relevant fields for comprehensiveness and accuracy of information, and scores were standardized for comparison. ChatGPT showed high reproducibility in its responses, with high uniformity in comprehensiveness, accuracy, and final scores. However, the mean scores of ChatGPT's responses were significantly lower than the benchmarks, indicating that it has not reached an expert level of competence in CRC. While it could provide accurate information, its responses lacked comprehensiveness. Notably, ChatGPT performed well in the domains of radiation therapy, interventional therapy, stoma care, venous care, and pain control, almost rivaling the benchmarks, but fell short in the basic information, surgery, and internal medicine domains. While ChatGPT demonstrated promise in specific domains, its overall performance in providing CRC information falls short of expert standards, indicating the need for further advances in AI technology for patient education in healthcare.
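The scoring procedure described in the abstract (physician ratings of each response on comprehensiveness and accuracy, standardized and combined for comparison against the book's benchmark answers) can be sketched as follows. This is a minimal illustration only, not the authors' code: the 0-10 rating scale, min-max standardization, equal weighting of the two criteria, and the example domains and values are all assumptions made for demonstration.

```python
# Minimal sketch of the evaluation workflow described in the abstract.
# Assumptions (not from the paper): ratings are on a 0-10 scale, scores are
# standardized by min-max scaling, and the two criteria are weighted equally.

from statistics import mean

def standardize(score: float, lo: float = 0.0, hi: float = 10.0) -> float:
    """Min-max scale a raw physician rating into [0, 1] (assumed scheme)."""
    return (score - lo) / (hi - lo)

def final_score(comprehensiveness: float, accuracy: float) -> float:
    """Combine the standardized criteria; equal weighting is an assumption."""
    return mean([standardize(comprehensiveness), standardize(accuracy)])

# Hypothetical ratings for ChatGPT answers vs. the book's benchmark answers.
ratings = [
    {"domain": "radiation therapy", "chatgpt": (9, 9), "benchmark": (9, 10)},
    {"domain": "surgery",           "chatgpt": (5, 8), "benchmark": (9, 10)},
]

for r in ratings:
    gpt = final_score(*r["chatgpt"])
    ref = final_score(*r["benchmark"])
    print(f"{r['domain']}: ChatGPT {gpt:.2f} vs benchmark {ref:.2f}")
```

Under these assumed numbers, the comparison reproduces the pattern the abstract reports: near-parity in domains such as radiation therapy, and a gap driven mainly by comprehensiveness in domains such as surgery.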
Pages: 16