Performance of Large Language Models ChatGPT and Gemini on Workplace Management Questions in Radiology

Cited by: 0
Authors
Leutz-Schmidt, Patricia [1 ]
Palm, Viktoria [1 ]
Mathy, Rene Michael [1 ]
Groezinger, Martin [2 ]
Kauczor, Hans-Ulrich [1 ]
Jang, Hyungseok [3 ]
Sedaghat, Sam [1 ]
Affiliations
[1] Univ Hosp Heidelberg, Dept Diagnost & Intervent Radiol, D-69120 Heidelberg, Germany
[2] German Canc Res Ctr, D-69120 Heidelberg, Germany
[3] Univ Calif Davis, Dept Radiol, Davis, CA 95616 USA
Keywords
large language models; chatbot; ChatGPT; Gemini; radiology; management; leadership
DOI
10.3390/diagnostics15040497
CLC Number
R5 [Internal Medicine]
Subject Classification Code
1002; 100201
Abstract
Background/Objectives: Despite the growing popularity of large language models (LLMs), there remains a notable lack of research examining their role in workplace management. This study aimed to address this gap by evaluating the performance of four widely used LLMs, ChatGPT-3.5, ChatGPT-4.0, Gemini, and Gemini Advanced, in responding to workplace management questions specific to radiology. Methods: ChatGPT-3.5 and ChatGPT-4.0 (both OpenAI, San Francisco, CA, USA) and Gemini and Gemini Advanced (both Google DeepMind, Mountain View, CA, USA) generated answers to 31 pre-selected questions on four different areas of workplace management in radiology: (1) patient management, (2) imaging and radiation management, (3) learning and personal development, and (4) administrative and department management. Two readers independently evaluated the answers provided by the LLM chatbots. Three 4-point scores were used to assess the quality of the responses: (1) overall quality score (OQS), (2) understandability score (US), and (3) implementability score (IS). The mean quality score (MQS) was calculated from these three scores. Results: The overall inter-rater reliability (IRR) was good for Gemini Advanced (IRR 79%), Gemini (IRR 78%), and ChatGPT-3.5 (IRR 65%), and moderate for ChatGPT-4.0 (IRR 54%). The overall MQS averaged 3.36 (SD: 0.64) for ChatGPT-3.5, 3.75 (SD: 0.43) for ChatGPT-4.0, 3.29 (SD: 0.64) for Gemini, and 3.51 (SD: 0.53) for Gemini Advanced. ChatGPT-4.0 achieved the highest OQS, US, IS, and MQS in all categories, followed by Gemini Advanced. ChatGPT-4.0 was the most consistently superior performer and outperformed all other chatbots (p < 0.001-0.002). Gemini Advanced performed significantly better than Gemini (p = 0.003) and showed a non-significant trend toward outperforming ChatGPT-3.5 (p = 0.056). ChatGPT-4.0 provided superior answers in most cases compared with the other LLM chatbots. None of the answers provided by the chatbots were rated "insufficient".
Conclusions: All four LLM chatbots performed well on workplace management questions in radiology. ChatGPT-4.0 outperformed ChatGPT-3.5, Gemini, and Gemini Advanced. Our study revealed that LLMs have the potential to improve workplace management in radiology by assisting with various tasks, making these processes more efficient without requiring specialized management skills.
Pages: 13