To learn or not learn from AI? Unpacking the effects of feedback valence on novel insights recall

Citations: 0
Authors
Turel, Ofir [1 ]
Affiliations
[1] Univ Melbourne, Comp & Informat Syst, Carlton, VIC, Australia
Keywords
Artificial intelligence; AI; learning; feedback; pedagogical agents; SELF-ESTEEM; STATE; AGGRESSION; AGENTS; ANGER; RESPONSIBILITY; ATTRIBUTIONS; SCAPEGOATS; APPRAISAL; SYSTEMS;
DOI
10.1080/0960085X.2024.2426473
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812 ;
Abstract
Artificial intelligence (AI) tools are unique pedagogical agents, in that (1) they can generate novel insights, and (2) they tend to be treated differently by humans, because they are more opaque and more intelligent than traditional technologies. It is therefore unclear if and how people learn novel insights through AI feedback, and whether the learning process and outcomes differ from cases in which the source of insights and feedback is human. Here, we seek to address this gap with four preregistered experiments (total n = 1,766). They use a learning paradigm in which we manipulate the feedback source, feedback valence, and test instructions. The key finding is that people learn novel insights better when they receive negative feedback from AI than from humans. There is no difference in recall for positive feedback. This implies that AI pedagogical agents are likely to outperform human feedback sources (peers or experts), or at least to be no worse. Our results further reveal that (1) people exhibit typical human-human learning biases when receiving negative feedback from AI, namely mnemic neglect, and (2) ego bruising (manifested as a reduction in state self-esteem) and anger mediate such effects when receiving negative feedback from AI.
Pages: 22