Responsible Use of Large Language Models: An Analogy with the Oxford Tutorial System

Cited by: 0
Authors
Lissack, Michael [1 ]
Meagher, Brenden [2 ]
Affiliations
[1] Tongji Univ, Coll Design & Innovat, Shanghai, Peoples R China
[2] Vrije Univ, Amsterdam, Netherlands
Keywords
responsible AI; Oxford Tutorial; large language models (LLMs); human-AI collaboration; critical thinking;
DOI
10.1016/j.sheji.2024.11.001
CLC Classification
C [Social Sciences, General];
Subject Classification
03; 0303;
Abstract
In the rapidly evolving landscape of artificial intelligence, large language models (LLMs) have emerged as powerful tools with the potential to revolutionize how we process information, generate content, and solve complex problems. However, integrating these sophisticated AI systems into academic and professional practices raises critical questions about responsible use, ethical considerations, and the preservation of human expertise. This article introduces a novel framework for understanding and implementing responsible AI use by drawing an analogy between the optimal use of LLMs and the role of the second student in an Oxford Tutorial. Through an in-depth exploration of the Oxford Tutorial system and its parallels with LLM interaction, we propose a nuanced approach to leveraging AI language models while maintaining human agency, fostering critical thinking, and upholding ethical standards. The article examines the implications of this analogy, discusses potential risks of misuse, and provides detailed practical scenarios across various fields. By grounding the use of cutting-edge AI technology in a well-established and respected educational model, this research contributes to the ongoing discourse on AI ethics. It offers valuable insights for academics, professionals, and policymakers grappling with the challenges and opportunities presented by LLMs.
Pages: 389-413
Page count: 25