Developing evaluative judgement for a time of generative artificial intelligence

Cited: 39
Authors
Bearman, Margaret [1]
Tai, Joanna [1]
Dawson, Phillip [1]
Boud, David [1,2,3]
Ajjawi, Rola [1]
Affiliations
[1] Deakin Univ, Ctr Res Assessment & Digital Learning (CRADLE), Melbourne, Australia
[2] Univ Technol Sydney, Fac Arts & Social Sci, Sydney, Australia
[3] Middlesex Univ, Work & Learning Res Ctr, London, England
Keywords
Generative artificial intelligence; evaluative judgement; assessment for learning; higher education; feedback
DOI
10.1080/02602938.2024.2335321
Chinese Library Classification
G40 [Education]
Discipline codes
040101; 120403
Abstract
Generative artificial intelligence (AI) has rapidly increased capacity for producing textual, visual and auditory outputs, yet there are ongoing concerns regarding the quality of those outputs. There is an urgent need to develop students' evaluative judgement - the capability to judge the quality of work of self and others - in recognition of this new reality. In this conceptual paper, we describe the intersection between evaluative judgement and generative AI with a view to articulating how assessment practices can help students learn to work productively with generative AI. We propose three foci: (1) developing evaluative judgement of generative AI outputs; (2) developing evaluative judgement of generative AI processes; and (3) generative AI assessment of student evaluative judgements. We argue for developing students' capabilities to identify and calibrate quality of work - uniquely human capabilities at a time of technological acceleration - through existing formative assessment strategies. These approaches circumvent and interrupt students' uncritical usage of generative AI. The relationship between evaluative judgement and generative AI is more than just the application of human judgement to machine outputs. We have a collective responsibility, as educators and learners, to ensure that humans do not relinquish their roles as arbiters of quality.
Pages: 893-905
Page count: 13