Answering head and neck cancer questions: An assessment of ChatGPT responses

Cited by: 22
Authors
Wei, Kimberly [1]
Fritz, Christian [1]
Rajasekaran, Karthik [1,2,3]
Affiliations
[1] Univ Penn, Dept Otorhinolaryngol Head & Neck Surg, Philadelphia, PA USA
[2] Univ Penn, Leonard Davis Inst Hlth Econ, Philadelphia, PA USA
[3] 800 Walnut St,18th Floor, Philadelphia, PA 19107 USA
Keywords
Head and neck cancer; Common questions; ChatGPT; Artificial intelligence; Patient education; Education materials; Information
DOI
10.1016/j.amjoto.2023.104085
Chinese Library Classification: R76 (Otorhinolaryngology)
Discipline code: 100213
Abstract
Purpose: To examine and compare ChatGPT versus Google websites in answering common head and neck cancer questions.
Materials and methods: Commonly asked questions about head and neck cancer were obtained and input into both ChatGPT-4 and the Google search engine. For each question, the ChatGPT response and the first website search result were compiled and examined. Content quality was assessed by independent reviewers using standardized grading criteria and the modified Ensuring Quality Information for Patients (EQIP) tool. Readability was determined using the Flesch reading ease scale.
Results: In total, 49 questions related to head and neck cancer were included. Google sources were on average of significantly higher quality than ChatGPT responses (4.2 vs 3.6, p = 0.005). According to the EQIP tool, Google and ChatGPT had on average similar response rates per criterion (24.4 vs 20.5, p = 0.09), while Google had a significantly higher average score per question than ChatGPT (13.8 vs 11.7, p < 0.001). According to the Flesch reading ease scale, ChatGPT and Google sources were both considered similarly difficult to read (33.1 vs 37.0, p = 0.180) and at a college level (14.3 vs 14.2, p = 0.820).
Conclusion: ChatGPT responses were as challenging to read as Google sources but of poorer quality, owing to decreased reliability and accuracy in answering questions. Though promising, ChatGPT in its current form should not be considered dependable. Google sources remain the preferred resource for patient education materials.
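The Flesch reading ease scale used in the study is a standard readability formula: 206.835 − 1.015 × (words/sentences) − 84.6 × (syllables/words), where lower scores mean harder text (scores around 30–50 correspond to college-level material, consistent with the ~33–37 reported above). A minimal sketch of the computation might look as follows; note the syllable counter is a crude vowel-group heuristic, not the dictionary-based counting that published readability tools use, so scores will only approximate theirs.

```python
import re


def flesch_reading_ease(text: str) -> float:
    """Flesch reading ease: 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words).

    Lower scores indicate harder text; roughly 30-50 is college level.
    """
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))

    def count_syllables(word: str) -> int:
        # Crude heuristic: count groups of consecutive vowels,
        # dropping a trailing silent "e" when another vowel group exists.
        groups = len(re.findall(r"[aeiouy]+", word.lower()))
        if word.lower().endswith("e") and groups > 1:
            groups -= 1
        return max(1, groups)

    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (n_words / sentences) - 84.6 * (syllables / n_words)
```

Short, monosyllabic sentences score high (easy), while long words drive the score down, which is why dense medical prose lands in the "difficult" band.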
Pages: 7