Performance of Large Language Models ChatGPT and Gemini on Workplace Management Questions in Radiology

Authors
Leutz-Schmidt, Patricia [1 ]
Palm, Viktoria [1 ]
Mathy, Rene Michael [1 ]
Groezinger, Martin [2 ]
Kauczor, Hans-Ulrich [1 ]
Jang, Hyungseok [3 ]
Sedaghat, Sam [1 ]
Affiliations
[1] Univ Hosp Heidelberg, Dept Diagnost & Intervent Radiol, D-69120 Heidelberg, Germany
[2] German Canc Res Ctr, D-69120 Heidelberg, Germany
[3] Univ Calif Davis, Dept Radiol, Davis, CA 95616 USA
Keywords
large language models; chatbot; ChatGPT; Gemini; radiology; management; leadership
DOI
10.3390/diagnostics15040497
Chinese Library Classification
R5 [Internal Medicine]
Discipline codes
1002; 100201
Abstract
Background/Objectives: Despite the growing popularity of large language models (LLMs), there remains a notable lack of research examining their role in workplace management. This study aimed to address this gap by evaluating the performance of four widely used LLM chatbots, ChatGPT-3.5, ChatGPT-4.0, Gemini, and Gemini Advanced, in responding to workplace management questions specific to radiology. Methods: ChatGPT-3.5 and ChatGPT-4.0 (both OpenAI, San Francisco, CA, USA) and Gemini and Gemini Advanced (both Google DeepMind, Mountain View, CA, USA) generated answers to 31 pre-selected questions covering four areas of workplace management in radiology: (1) patient management, (2) imaging and radiation management, (3) learning and personal development, and (4) administrative and department management. Two readers independently evaluated the answers provided by the LLM chatbots. Three 4-point scores were used to assess the quality of the responses: (1) overall quality score (OQS), (2) understandability score (US), and (3) implementability score (IS). The mean quality score (MQS) was calculated from these three scores. Results: The overall inter-rater reliability (IRR) was good for Gemini Advanced (IRR 79%), Gemini (IRR 78%), and ChatGPT-3.5 (IRR 65%), and moderate for ChatGPT-4.0 (IRR 54%). The overall MQS averaged 3.36 (SD: 0.64) for ChatGPT-3.5, 3.75 (SD: 0.43) for ChatGPT-4.0, 3.29 (SD: 0.64) for Gemini, and 3.51 (SD: 0.53) for Gemini Advanced. ChatGPT-4.0 achieved the highest OQS, US, IS, and MQS in all categories, followed by Gemini Advanced. ChatGPT-4.0 was the most consistently superior performer and outperformed all other chatbots (p < 0.001–0.002). Gemini Advanced performed significantly better than Gemini (p = 0.003) and showed a non-significant trend toward outperforming ChatGPT-3.5 (p = 0.056). ChatGPT-4.0 provided superior answers in most cases compared with the other LLM chatbots. None of the answers provided by the chatbots were rated "insufficient".
Conclusions: All four LLM chatbots performed well on workplace management questions in radiology. ChatGPT-4.0 outperformed ChatGPT-3.5, Gemini, and Gemini Advanced. Our study revealed that LLMs have the potential to improve workplace management in radiology by assisting with various tasks, making these processes more efficient without requiring specialized management skills.
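The scoring scheme described in the Methods can be sketched in a few lines. The following is a minimal illustration, not the authors' analysis code: the MQS is taken as the mean of the three 4-point sub-scores, and the IRR is computed as exact-match percent agreement between the two readers, an assumption on our part, since the abstract reports IRR as percentages but does not name the statistic used. All scores below are made up.

```python
# Hypothetical sketch of the scoring scheme described in the abstract.
# Assumptions: MQS = mean of OQS, US, and IS; IRR = exact-match percent
# agreement between two raters. Scores are invented for illustration.
from statistics import mean


def mean_quality_score(oqs: int, us: int, is_: int) -> float:
    """Mean of overall quality, understandability, and implementability scores."""
    return mean([oqs, us, is_])


def percent_agreement(rater_a: list[int], rater_b: list[int]) -> float:
    """Percentage of items on which the two raters assigned identical scores."""
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return 100 * matches / len(rater_a)


# Three hypothetical answers scored on a 1-4 scale by two readers
reader_1 = [4, 3, 4]
reader_2 = [4, 3, 3]

print(mean_quality_score(4, 3, 4))            # mean of the three sub-scores
print(percent_agreement(reader_1, reader_2))  # agreement on 2 of 3 items
```

More robust IRR statistics (e.g., Cohen's kappa) correct for chance agreement; percent agreement is shown here only because it matches the percentage form in which the abstract reports IRR.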
Pages: 13