BPPV Information on Google Versus AI (ChatGPT)

Cited by: 29
Authors
Bellinger, Jeffrey R. [1 ]
de la Chapa, Julian S. [1 ]
Kwak, Minhie W. [1 ]
Ramos, Gabriel A. [1 ]
Morrison, Daniel [1 ]
Kesser, Bradley W. [1 ,2 ]
Affiliations
[1] Univ Virginia, Sch Med, Dept Otolaryngol Head & Neck Surg, Charlottesville, VA USA
[2] Univ Virginia, Sch Med, Dept Otolaryngol Head & Neck Surg, POB 800713, Charlottesville, VA 22903 USA
Keywords
artificial intelligence; benign paroxysmal positional vertigo; ChatGPT; Google; online information; quality; readability; understandability; PATIENT EDUCATION MATERIALS; READABILITY ASSESSMENT; INTERNET; QUALITY; WEBSITES;
DOI
10.1002/ohn.506
Chinese Library Classification
R76 [Otorhinolaryngology]
Discipline Code
100213
Abstract
Objective. To quantitatively compare online patient education materials found using a traditional search engine (Google) versus a conversational artificial intelligence (AI) model (ChatGPT) for benign paroxysmal positional vertigo (BPPV).
Study Design. The top 30 Google search results for "benign paroxysmal positional vertigo" were compared with responses from OpenAI's conversational AI language model, ChatGPT, to 5 common patient questions about BPPV, posed in February 2023. Metrics included readability, quality, understandability, and actionability.
Setting. Online information.
Methods. Validated online information metrics, including the Flesch-Kincaid Grade Level (FKGL), Flesch Reading Ease (FRE), DISCERN instrument score, and Patient Education Materials Assessment Tool for Printed Materials, were analyzed and scored by reviewers.
Results. Mean readability scores (FKGL and FRE) for the Google webpages were 10.7 ± 2.6 and 46.5 ± 14.3, respectively. ChatGPT responses had a higher FKGL score of 13.9 ± 2.5 (P < .001) and a lower FRE score of 34.9 ± 11.2 (P = .005), both corresponding to lower readability. The Google webpages had a DISCERN part 2 score of 25.4 ± 7.5, compared with 17.5 ± 3.9 for the individual ChatGPT responses (P = .001) and 25.0 ± 0.9 for the combined ChatGPT responses (P = .928). Reviewers' mean scores across all ChatGPT responses were 4.19 ± 0.82 for accuracy and 4.31 ± 0.67 for currency.
Conclusion. The results of this study suggest that information from ChatGPT is more difficult to read, of lower quality, and more difficult to comprehend than information found through Google searches.
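For context, the two readability formulas used in the abstract (FKGL and FRE) are standard functions of average sentence length and average syllables per word. The sketch below is an illustrative Python approximation only; the crude vowel-group syllable counter is an assumption of this example, not the tool the authors used to score the materials.

```python
import re

def count_syllables(word: str) -> int:
    """Crude syllable estimate: count vowel groups, with a simple silent-e rule.
    This heuristic is an approximation, not a dictionary-backed count."""
    w = word.lower()
    n = len(re.findall(r"[aeiouy]+", w))
    if w.endswith("e") and not w.endswith("le") and n > 1:
        n -= 1  # drop a likely silent trailing 'e'
    return max(n, 1)

def readability(text: str) -> tuple[float, float]:
    """Return (FKGL, FRE) for a passage of English prose.

    FKGL = 0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59
    FRE  = 206.835 - 1.015 * (words/sentences) - 84.6 * (syllables/words)
    Higher FKGL and lower FRE both indicate harder-to-read text.
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    wps = len(words) / len(sentences)   # average words per sentence
    spw = syllables / len(words)        # average syllables per word
    fkgl = 0.39 * wps + 11.8 * spw - 15.59
    fre = 206.835 - 1.015 * wps - 84.6 * spw
    return round(fkgl, 1), round(fre, 1)
```

On this scale, the study's ChatGPT scores (FKGL 13.9, FRE 34.9) versus Google's (FKGL 10.7, FRE 46.5) both point the same direction: longer sentences and more polysyllabic words in the AI responses.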
Pages: 1504-1511
Page count: 8