Is artificial intelligence ready to replace specialist doctors entirely? ENT specialists vs ChatGPT: 1-0, ball at the center
Cited by: 19
Authors:
Dallari, Virginia [1,2]; Sacchetto, Andrea [1,3]; Saetti, Roberto [3]; Calabrese, Luca [4]; Vittadello, Fabio [5]; Gazzini, Luca [1,4]
Affiliations:
[1] Y CEORL HNS, Young Confederat European ORL HNS, Dublin, Ireland
[2] Univ Verona, Head & Neck Dept, Unit Otorhinolaryngol, Piazzale LA Scuro 10, I-37134 Verona, Italy
[3] AULSS 8 Berica, Osped San Bortolo, Dept Dermatol, Vicenza, Italy
[4] Paracelsus Med Univ PMU, Hosp Bolzano SABES ASDAA, Dept Otorhinolaryngol Head & Neck Surg, Teaching Hosp, Bolzano, Italy
[5] Explora Res & Stat Anal, Padua, Italy
Keywords:
Machine learning;
ChatGPT;
Otolaryngology;
Natural language processing;
Research;
DOI:
10.1007/s00405-023-08321-1
CLC number:
R76 [Otorhinolaryngology];
Discipline code:
100213;
Abstract:
Purpose: To evaluate ChatGPT's responses to Ear, Nose and Throat (ENT) clinical cases and compare them with the responses of ENT specialists.
Methods: We devised 10 scenarios based on everyday ENT practice, all sharing the same primary symptom, and constructed 20 clinical cases, 2 for each scenario. We presented them to 3 ENT specialists and to ChatGPT. The difficulty of the clinical cases was rated by the 5 ENT authors of this article, who also scored ChatGPT's responses for correctness and for consistency with the responses of the 3 ENT experts. To verify the stability of ChatGPT's responses, we repeated the searches, always from the same account, on 5 consecutive days.
Results: Of the 20 cases, 8 were rated as low complexity, 6 as moderate complexity and 6 as high complexity. The overall mean correctness and consistency scores of ChatGPT's responses were 3.80 (SD 1.02) and 2.89 (SD 1.24), respectively. We found no statistically significant difference in ChatGPT's mean correctness or consistency score according to case complexity. The total intraclass correlation coefficient (ICC) for the stability of ChatGPT's correctness and consistency was 0.763 (95% confidence interval [CI] 0.553-0.895) and 0.837 (95% CI 0.689-0.927), respectively.
Conclusions: Our results revealed the potential usefulness of ChatGPT in ENT diagnosis. The instability in responses and the inability to recognise certain clinical elements are its main limitations.
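The stability analysis above reports intraclass correlation coefficients for ratings repeated over 5 days. As a minimal sketch of how such a coefficient can be computed, assuming the two-way random-effects, absolute-agreement, single-rater form ICC(2,1) (the record does not state which ICC form the authors used) and treating the repeated days as the "raters":

```python
import numpy as np

def icc2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    ratings: (n_targets, k_raters) array-like, e.g. rows = clinical cases,
    columns = repeated measurement days.
    """
    Y = np.asarray(ratings, dtype=float)
    n, k = Y.shape
    grand = Y.mean()
    row_means = Y.mean(axis=1)   # per-case means
    col_means = Y.mean(axis=0)   # per-day means
    # Partition total sum of squares into rows (targets), columns (raters), error
    ss_total = ((Y - grand) ** 2).sum()
    ss_rows = k * ((row_means - grand) ** 2).sum()
    ss_cols = n * ((col_means - grand) ** 2).sum()
    ss_err = ss_total - ss_rows - ss_cols
    # Mean squares from the two-way ANOVA decomposition
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Identical scores on every day -> perfect stability
print(icc2_1([[1, 1], [2, 2], [3, 3]]))  # 1.0
```

This is an illustrative formula only; libraries such as pingouin's `intraclass_corr` also report the confidence intervals quoted in the Results.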
Pages: 995-1023
Page count: 29