Can large language models write reflectively

Cited by: 0
Authors
Li Y. [1 ]
Sha L. [1 ]
Yan L. [1 ]
Lin J. [1 ]
Raković M. [1 ]
Galbraith K. [2 ]
Lyons K. [3 ]
Gašević D. [1 ]
Chen G. [1 ]
Affiliations
[1] Centre for Learning Analytics, Monash University
[2] Experiential Development and Graduate Education, Faculty of Pharmacy and Pharmaceutical Sciences, Monash University
[3] Centre for Digital Transformation of Health, University of Melbourne
Source
Computers and Education: Artificial Intelligence | 2023 / Vol. 4
Keywords
ChatGPT; Generative language model; Natural language processing; Reflective writing
DOI
10.1016/j.caeai.2023.100140
Abstract
Generative Large Language Models (LLMs) demonstrate impressive results in different writing tasks and have already attracted much attention from researchers and practitioners. However, little research has investigated the capability of generative LLMs for reflective writing. To this end, in the present study, we extensively reviewed the existing literature and selected 9 representative prompting strategies for ChatGPT, the chatbot based on state-of-the-art generative LLMs, to generate a diverse set of reflective responses, which were combined with student-written reflections. Next, those responses were evaluated by experienced teaching staff following a theory-aligned assessment rubric that was designed to evaluate student-generated reflections in several university-level pharmacy courses. Furthermore, we explored the extent to which Deep Learning classification methods can be utilised to automatically differentiate between reflective responses written by students and reflective responses generated by ChatGPT. To this end, we harnessed BERT, a state-of-the-art Deep Learning classifier, and compared the performance of this classifier to the performance of human evaluators and the AI content detector by OpenAI. Following our extensive experimentation, we found that (i) ChatGPT may be capable of generating high-quality reflective responses in writing assignments administered across different pharmacy courses; (ii) the quality of automatically generated reflective responses was higher on all six assessment criteria than the quality of student-written reflections; and (iii) a domain-specific BERT-based classifier could effectively differentiate between student-written and ChatGPT-generated reflections, greatly surpassing (up to 38% higher across four accuracy metrics) the classification performed by experienced teaching staff and the general-domain classifier, even in cases where the testing prompts were not known at the time of model training. © 2023 The Authors
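The classification task described in the abstract — telling student-written reflections apart from ChatGPT-generated ones — is framed as binary text classification. The study fine-tunes BERT for this; as a minimal, self-contained sketch of the same setup (labelled training texts, a learned per-class profile, a prediction step), the toy stand-in below uses bag-of-words features with a nearest-centroid rule instead of BERT. All names and the example data here are illustrative, not from the paper.

```python
# Toy stand-in for the paper's binary classification task:
# distinguish "student" reflections from "ai" reflections.
# The actual study fine-tunes BERT; this sketch replaces it with
# bag-of-words features and cosine similarity to class centroids.
from collections import Counter
import math

def features(text):
    """Lowercased bag-of-words counts for one text."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(cnt * b.get(word, 0) for word, cnt in a.items())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def train(examples):
    """examples: list of (text, label). Returns per-label word-count centroids."""
    centroids = {}
    for text, label in examples:
        centroids.setdefault(label, Counter()).update(features(text))
    return centroids

def predict(centroids, text):
    """Assign the label whose centroid is most similar to the text."""
    f = features(text)
    return max(centroids, key=lambda label: cosine(f, centroids[label]))

# Illustrative usage with made-up toy data:
model = train([
    ("i felt unsure about my dosing decision", "student"),
    ("reflective practice enables systematic professional growth", "ai"),
])
print(predict(model, "i felt unsure today"))  # → student
```

A fine-tuned BERT replaces the hand-built features with contextual embeddings and the centroid rule with a learned classification head, but the train/predict structure of the experiment is the same.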