Head-to-Head Comparison of ChatGPT Versus Google Search for Medical Knowledge Acquisition

Cited by: 55
Authors
Ayoub, Noel F. [1 ,2 ]
Lee, Yu-Jin [1 ]
Grimm, David [1 ]
Divi, Vasu [1 ]
Affiliations
[1] Stanford Univ, Sch Med, Dept Otolaryngol Head & Neck Surg, Div Head & Neck Surg, Stanford, CA USA
[2] Stanford Univ, Sch Med, Dept Otolaryngol Head & Neck Surg, Div Head & Neck Surg, 801 Welch Rd, Stanford, CA 94305 USA
Keywords
artificial intelligence; ChatGPT; generative artificial intelligence; health literacy; large language models; online search engines; patient education;
DOI
10.1002/ohn.465
CLC Classification Number
R76 [Otorhinolaryngology];
Subject Classification Code
100213 ;
Abstract
Objective: Chat Generative Pretrained Transformer (ChatGPT) is the newest iteration of OpenAI's generative artificial intelligence (AI), with the potential to influence many facets of life, including health care. This study sought to assess ChatGPT's capabilities as a source of medical knowledge, using Google Search as a comparison. Study Design: Cross-sectional analysis. Setting: Online, using ChatGPT, Google Search, and Clinical Practice Guidelines (CPGs). Methods: CPG Plain Language Summaries for 6 conditions were obtained. Questions relevant to each condition were developed and input into ChatGPT and Google Search. All questions were written from the patient perspective and sought (1) general medical knowledge or (2) medical recommendations, with varying levels of acuity (urgent or emergent vs routine clinical scenarios). Two blinded reviewers scored all passages and compared results from ChatGPT and Google Search, using the Patient Education Materials Assessment Tool (PEMAT-P) as the primary outcome. Additional customized questions were developed to assess the medical content of the passages. Results: The overall average PEMAT-P score for medical advice was 68.2% (standard deviation [SD]: 4.4) for ChatGPT and 89.4% (SD: 5.9) for Google Search (p < .001). There was a statistically significant difference in PEMAT-P score by source (p < .001) but not by urgency of the clinical situation (p = .613). ChatGPT scored significantly higher than Google Search (87% vs 78%, p = .012) for patient education questions. Conclusion: ChatGPT fared better than Google Search when offering general medical knowledge but scored worse when providing medical recommendations. Health care providers should strive to understand the potential benefits and ramifications of generative AI to guide patients appropriately.
Pages: 1484-1491
Page count: 8