Assessing the Utility of ChatGPT Throughout the Entire Clinical Workflow: Development and Usability Study

Cited by: 108
Authors
Rao, Arya [1,2,3]
Pang, Michael [1,2,3]
Kim, John [1,2,3]
Kamineni, Meghana [1,2,3]
Lie, Winston [1,2,3]
Prasad, Anoop K. [1,2,3]
Landman, Adam [2,4]
Dreyer, Keith [2,5]
Succi, Marc D. [1,2,3,6]
Affiliations
[1] Massachusetts Gen Hosp, Innovat Operat Res Ctr MESH IO, Medically Engn Solut Healthcare Incubator, Boston, MA USA
[2] Harvard Med Sch, Boston, MA USA
[3] Massachusetts Gen Hosp, Dept Radiol, 55 Fruit St, Boston, MA 02114 USA
[4] Brigham & Womens Hosp, Dept Radiol, Boston, MA USA
[5] Mass Gen Brigham, Data Sci Off, Boston, MA USA
[6] Mass Gen Brigham, Mass Gen Brigham Innovat, Boston, MA USA
Keywords
large language models; LLMs; artificial intelligence; AI; clinical decision support; clinical vignettes; ChatGPT; Generative Pre-trained Transformer; GPT; utility; development; usability; chatbot; accuracy; decision-making
DOI: 10.2196/48659
Chinese Library Classification: R19 [Health care organization and services (health service management)]
Abstract
Background: Large language model (LLM)-based artificial intelligence chatbots direct the power of large training data sets toward successive, related tasks, as opposed to single-ask tasks, for which artificial intelligence already achieves impressive performance. The capacity of LLMs to assist in the full scope of iterative clinical reasoning via successive prompting, in effect acting as artificial physicians, has not yet been evaluated.

Objective: This study aimed to evaluate ChatGPT's capacity for ongoing clinical decision support via its performance on standardized clinical vignettes.

Methods: We inputted all 36 published clinical vignettes from the Merck Sharp & Dohme (MSD) Clinical Manual into ChatGPT and compared its accuracy on differential diagnoses, diagnostic testing, final diagnosis, and management based on patient age, gender, and case acuity. Accuracy was measured as the proportion of correct responses to the questions posed within the clinical vignettes tested, as calculated by human scorers. We further conducted linear regression to assess the factors contributing to ChatGPT's performance on clinical tasks.

Results: ChatGPT achieved an overall accuracy of 71.7% (95% CI 69.3%-74.1%) across all 36 clinical vignettes. The LLM performed best at making a final diagnosis, with an accuracy of 76.9% (95% CI 67.8%-86.1%), and worst at generating an initial differential diagnosis, with an accuracy of 60.3% (95% CI 54.2%-66.6%). Compared to answering questions about general medical knowledge, ChatGPT demonstrated inferior performance on differential diagnosis (beta=-15.8%; P<.001) and clinical management (beta=-7.4%; P=.02) question types.

Conclusions: ChatGPT achieves impressive accuracy in clinical decision-making, growing stronger as more clinical information becomes available to it. In particular, ChatGPT is most accurate in tasks of final diagnosis as compared to initial diagnosis. Limitations include possible model hallucinations and the unclear composition of ChatGPT's training data set.
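The accuracy metric in the Methods is a simple proportion of correct responses, reported with a 95% confidence interval. A minimal sketch of how such a point estimate and a normal-approximation (Wald) interval could be computed — the response counts below are hypothetical, since the record does not give per-question totals:

```python
import math

def proportion_ci(correct: int, total: int, z: float = 1.96):
    """Proportion of correct responses with a normal-approximation
    (Wald) 95% confidence interval."""
    p = correct / total
    se = math.sqrt(p * (1 - p) / total)  # standard error of a proportion
    return p, p - z * se, p + z * se

# Hypothetical counts chosen only so the proportion lands near the
# paper's reported overall accuracy of 71.7%.
p, lo, hi = proportion_ci(correct=717, total=1000)
print(f"accuracy = {p:.1%}, 95% CI {lo:.1%}-{hi:.1%}")
```

The actual study may have used a different interval method (e.g. Wilson or bootstrap); this is only the textbook Wald form.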
Pages: 9