ChatGPT in medical imaging higher education

Cited: 75
Authors
Currie, G. [1 ,3 ]
Singh, C. [1 ]
Nelson, T. [2 ]
Nabasenja, C. [2 ]
Al-Hayek, Y. [1 ]
Spuur, K. [1 ]
Affiliations
[1] Charles Sturt Univ, Wagga Wagga, NSW, Australia
[2] Charles Sturt Univ, Port Macquarie, NSW, Australia
[3] Charles Sturt Univ, Sch Dent & Med Sci, Locked Bag 588, Wagga Wagga, NSW 2678, Australia
Keywords
ChatGPT; Artificial intelligence; Higher education; Academic integrity; Generative algorithms; Language model;
DOI
10.1016/j.radi.2023.05.011
Chinese Library Classification
R8 [Special Medicine]; R445 [Diagnostic Imaging];
Subject Classification Codes
1002; 100207; 1009
Abstract
Introduction: Academic integrity among radiographers and nuclear medicine technologists/scientists in both higher education and scientific writing has been challenged by advances in artificial intelligence (AI). The recent release of ChatGPT, a chatbot powered by GPT-3.5 capable of producing accurate and human-like responses to questions in real time, has redefined the boundaries of academic and scientific writing. These boundaries require objective evaluation.
Method: ChatGPT was tested against six subjects across the first three years of the medical radiation science undergraduate course for both exams (n = 6) and written assignment tasks (n = 3). ChatGPT submissions were marked against standardised rubrics and the results compared to student cohorts. Submissions were also evaluated by Turnitin for similarity and AI scores.
Results: ChatGPT powered by GPT-3.5 performed below the average student in all written tasks, with an increasing disparity as subjects advanced. ChatGPT performed better than the average student in foundation or general subject examinations, where shallow responses meet learning outcomes. For discipline-specific subjects, ChatGPT lacked the depth, breadth, and currency of insight to provide pass-level answers.
Conclusion: ChatGPT simultaneously poses a risk to academic integrity in writing and assessment while affording a tool for enhanced learning environments. Both the risks and the benefits are likely to be restricted to learning outcomes of lower taxonomies and constrained at higher-order taxonomies.
Implications for practice: ChatGPT powered by GPT-3.5 has limited capacity to support student cheating, introduces errors and fabricated information, and is readily identified by software as AI generated. Its lack of depth of insight and appropriateness for professional communication also limits its capacity as a learning enhancement tool.
Pages: 792-799
Page count: 8
References
7 items in total
[1] Alkaissi H, McFarlane SI. Artificial Hallucinations in ChatGPT: Implications in Scientific Writing. Cureus Journal of Medical Science, 2023, 15(2).
[2] Awdry R. J Acad Ethics, 2022, 8: 1.
[3] Choi JH. J Legal Educ, 2022, 71: 387.
[4] Falleur D. J Allied Health, 1990, 19: 313.
[5] Gravel J. medRxiv, 2023. DOI: 10.1101/2023.03.16.23286914.
[6] Kung TH, Cheatham M, Medenilla A, Sillos C, De Leon L, Elepano C, Madriaga M, Aggabao R, Diaz-Candido G, Maningo J, Tseng V. Performance of ChatGPT on USMLE: Potential for AI-assisted medical education using large language models. PLOS Digital Health, 2023, 2(2).