Reliability of artificial intelligence chatbot responses to frequently asked questions in breast surgical oncology

Cited by: 13
Authors
Roldan-Vasquez, Estefania [1 ]
Mitri, Samir [1 ]
Bhasin, Shreya [1 ,2 ]
Bharani, Tina [1 ,3 ]
Capasso, Kathryn [1 ]
Haslinger, Michelle [1 ]
Sharma, Ranjna [1 ]
James, Ted A. [1 ,4 ]
Affiliations
[1] Harvard Med Sch, Beth Israel Deaconess Med Ctr, Breast Surg Oncol, Dept Surg, Boston, MA USA
[2] Univ Rochester, Sch Med & Dent, Rochester, NY USA
[3] Harvard Med Sch, Brigham & Womens Hosp, Dept Surg, Boston, MA USA
[4] Harvard Med Sch, Beth Israel Deaconess Med Ctr, Dept Surg, Breast Surg Oncol, Linsey BreastCare Ctr, Acad Affai, 330 Brookline Ave, Boston, MA 02115 USA
Keywords
artificial intelligence; breast cancer; ChatGPT; education; surgery
DOI
10.1002/jso.27715
Chinese Library Classification (CLC)
R73 [Oncology]
Subject Classification Code
100214
Abstract
Introduction: Artificial intelligence (AI)-driven chatbots, capable of simulating human-like conversations, are becoming more prevalent in healthcare. While this technology offers potential benefits in patient engagement and information accessibility, it raises concerns about misuse, misinformation, inaccuracies, and ethical challenges. Methods: This study evaluated the responses of a publicly available AI chatbot, ChatGPT, to nine breast cancer surgery questions selected from the American Society of Breast Surgeons' frequently asked questions (FAQ) patient education website. Four breast surgical oncologists assessed the responses for accuracy and reliability using a five-point Likert scale and the Patient Education Materials Assessment Tool (PEMAT). Results: The average reliability score for ChatGPT in answering breast cancer surgery questions was 3.98 out of 5.00. The surgeons unanimously rated the responses as understandable and actionable per the PEMAT criteria. The consensus was that ChatGPT's overall performance was appropriate, with minor or no inaccuracies. Conclusion: ChatGPT demonstrates good reliability in responding to breast cancer surgery queries, with only minor, nonharmful inaccuracies. Its answers are largely accurate, clear, and easy to comprehend. Notably, ChatGPT acknowledged its informational role and did not attempt to replace medical advice or discourage users from seeking input from a healthcare professional.
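The abstract does not include the per-item data, so the following is only a minimal sketch of how the reported metrics could be aggregated: each of four raters scores each of nine chatbot responses on a 1-5 Likert scale, the overall reliability score is the mean across all question-rater pairs, and PEMAT understandability/actionability are reported as the percentage of applicable items judged "agree". All scores, item counts, and the pemat_score helper below are hypothetical, not the study's actual instrument or data.

```python
# Illustrative aggregation of reviewer ratings for an AI-chatbot evaluation.
# All ratings below are hypothetical placeholders, not the study's data.
from statistics import mean

N_QUESTIONS = 9   # FAQ items from the ASBrS patient education site
N_RATERS = 4      # breast surgical oncologists

# likert_scores[q][r] = rating (1-5) given by rater r to the response for question q
likert_scores = [
    [4, 4, 5, 4],
    [4, 3, 4, 4],
    [5, 4, 4, 5],
    [4, 4, 4, 3],
    [4, 5, 4, 4],
    [3, 4, 4, 4],
    [4, 4, 5, 4],
    [4, 4, 4, 4],
    [5, 4, 4, 4],
]

# Mean reliability across all question-rater pairs (the study reports 3.98/5.00).
all_ratings = [score for question in likert_scores for score in question]
reliability = mean(all_ratings)

# PEMAT-style scoring: each applicable item is marked 1 (agree) or 0 (disagree);
# the domain score is the percentage of items marked agree.
def pemat_score(item_ratings: list[int]) -> float:
    """Return a PEMAT domain score as the percentage of 'agree' items."""
    return 100.0 * sum(item_ratings) / len(item_ratings)

understandability_items = [1, 1, 1, 1, 1, 1]  # hypothetical per-item judgments
actionability_items = [1, 1, 1, 1]

print(f"Mean Likert reliability: {reliability:.2f} / 5.00")
print(f"PEMAT understandability: {pemat_score(understandability_items):.0f}%")
print(f"PEMAT actionability: {pemat_score(actionability_items):.0f}%")
```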
Pages: 188-203
Page count: 16