Answering head and neck cancer questions: An assessment of ChatGPT responses

Cited by: 22
Authors
Wei, Kimberly [1 ]
Fritz, Christian [1 ]
Rajasekaran, Karthik [1 ,2 ,3 ]
Affiliations
[1] Univ Penn, Dept Otorhinolaryngol Head & Neck Surg, Philadelphia, PA USA
[2] Univ Penn, Leonard Davis Inst Hlth Econ, Philadelphia, PA USA
[3] 800 Walnut St,18th Floor, Philadelphia, PA 19107 USA
Keywords
Head and neck cancer; Common questions; ChatGPT; Artificial intelligence; Patient education; EDUCATION MATERIALS; INFORMATION
DOI
10.1016/j.amjoto.2023.104085
CLC number
R76 [Otorhinolaryngology]
Discipline classification code
100213
Abstract
Purpose: To examine and compare ChatGPT versus Google websites in answering common head and neck cancer questions.
Materials and methods: Commonly asked questions about head and neck cancer were obtained and entered into both ChatGPT-4 and the Google search engine. For each question, the ChatGPT response and the first website returned by the search were compiled and examined. Content quality was assessed by independent reviewers using standardized grading criteria and the modified Ensuring Quality Information for Patients (EQIP) tool. Readability was determined using the Flesch reading ease scale.
Results: In total, 49 questions related to head and neck cancer were included. Google sources were on average of significantly higher quality than ChatGPT responses (4.2 vs 3.6, p = 0.005). According to the EQIP tool, Google and ChatGPT had similar average response rates per criterion (24.4 vs 20.5, p = 0.09), while Google had a significantly higher average score per question than ChatGPT (13.8 vs 11.7, p < 0.001). According to the Flesch reading ease scale, ChatGPT and Google sources were similarly difficult to read (33.1 vs 37.0, p = 0.180) and both at a college reading level (14.3 vs 14.2, p = 0.820).
Conclusion: ChatGPT responses were as challenging to read as Google sources but of poorer quality, owing to decreased reliability and accuracy in answering questions. Though promising, ChatGPT in its current form should not be considered dependable. Google sources remain the preferred resource for patient education materials.
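Readability note: the Flesch reading ease and grade-level figures cited above follow the standard Flesch formulas. A minimal Python sketch is given below for illustration only; the naive syllable counter and the flesch_scores helper are assumptions of this sketch, not the tooling used in the study.

import re

def count_syllables(word: str) -> int:
    # Rough vowel-group count; a crude heuristic, adequate for illustration only.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_scores(text: str) -> tuple[float, float]:
    # Flesch Reading Ease (roughly 30-50 reads as "difficult", i.e. college-level text)
    # and Flesch-Kincaid Grade Level (about 14 corresponds to a college reading grade).
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))
    n_syllables = sum(count_syllables(w) for w in words)
    ease = 206.835 - 1.015 * (n_words / sentences) - 84.6 * (n_syllables / n_words)
    grade = 0.39 * (n_words / sentences) + 11.8 * (n_syllables / n_words) - 15.59
    return ease, grade

On these scales, ease scores near 33-37 and grade levels near 14, as reported above, indicate text that many patients would find difficult to read.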
Pages: 7
Related papers (50 records in total)
  • [1] Is ChatGPT accurate and reliable in answering questions regarding head and neck cancer?
    Kuscu, Oguz
    Pamuk, A. Erim
    Suslu, Nilda Sutay
    Hosal, Sefik
    FRONTIERS IN ONCOLOGY, 2023, 13
  • [2] Assessment of ChatGPT generated educational material for head and neck surgery counseling
    Mnajjed, Lana
    Patel, Rusha J.
    AMERICAN JOURNAL OF OTOLARYNGOLOGY, 2024, 45 (05)
  • [3] An assessment of ChatGPT's responses to frequently asked questions about cervical and breast cancer
    Ye, Zichen
    Zhang, Bo
    Zhang, Kun
    Mendez, Maria Jose Gonzalez
    Yan, Huijiao
    Wu, Tong
    Qu, Yimin
    Jiang, Yu
    Xue, Peng
    Qiao, Youlin
    BMC WOMENS HEALTH, 2024, 24 (01)
  • [4] Evaluating the performance of ChatGPT in answering questions related to urolithiasis
    Cakir, Hakan
    Caglar, Ufuk
    Yildiz, Oguzhan
    Meric, Arda
    Ayranci, Ali
    Ozgor, Faruk
    INTERNATIONAL UROLOGY AND NEPHROLOGY, 2024, 56 (01) : 17 - 21
  • [5] An Assessment of ChatGPT's Responses to Common Patient Questions About Lung Cancer Surgery: A Preliminary Clinical Evaluation of Accuracy and Relevance
    Troian, Marina
    Lovadina, Stefano
    Ravasin, Alice
    Arbore, Alessia
    Aleksova, Aneta
    Baratella, Elisa
    Cortale, Maurizio
    JOURNAL OF CLINICAL MEDICINE, 2025, 14 (05)
  • [6] Assessing the applicability and appropriateness of ChatGPT in answering clinical pharmacy questions
    Fournier, A.
    Fallet, C.
    Sadeghipour, F.
    Perrottet, N.
    ANNALES PHARMACEUTIQUES FRANCAISES, 2024, 82 (03) : 507 - 513
  • [7] Assessing the Knowledge of ChatGPT in Answering Questions Regarding Female Urology
    Cakir, Hakan
    Caglar, Ufuk
    Halis, Ahmet
    Sarilar, Omer
    Yazili, Huseyin Burak
    Ozgor, Faruk
    UROLOGY JOURNAL, 2024, 21 (06) : 410 - 414
  • [8] Evaluation of ChatGPT as a Tool for Answering Clinical Questions in Pharmacy Practice
    Munir, Faria
    Gehres, Anna
    Wai, David
    Song, Leah
    JOURNAL OF PHARMACY PRACTICE, 2024, 37 (06) : 1303 - 1310
  • [9] Performance of ChatGPT in Answering Clinical Questions on the Practical Guideline of Blepharoptosis
    Shiraishi, Makoto
    Tomioka, Yoko
    Miyakuni, Ami
    Ishii, Saaya
    Hori, Asei
    Park, Hwayoung
    Ohba, Jun
    Okazaki, Mutsumi
    AESTHETIC PLASTIC SURGERY, 2024, : 2389 - 2398