Investigating perceived fairness of AI prediction system for math learning: A mixed-methods study with college students

Times Cited: 0
Authors
Song, Yukyeong [1 ]
Li, Chenglu [2 ]
Xing, Wanli [1 ,3 ]
Lyu, Bailing [2 ]
Zhu, Wangda [1 ]
Affiliations
[1] Univ Florida, Coll Educ, Sch Teaching & Learning, Gainesville, FL 32611 USA
[2] Univ Utah, Coll Educ, Educ Psychol, Salt Lake City, UT 84112 USA
[3] 1221 SW 5th Ave, Gainesville, FL 32601 USA
Keywords
Perceived fairness; Algorithmic bias; Transparency; AI fairness; AI decision-making; ORGANIZATIONAL JUSTICE; EYE-TRACKING; ANXIETY; EXPECTATIONS; INDIVIDUALS; DECISION; QUALITY;
DOI
10.1016/j.iheduc.2025.101000
Chinese Library Classification
G40 [Education]
Discipline Classification Codes
040101; 120403
Abstract
Entities such as governments and universities have begun using AI for algorithmic decision-making that impacts people's lives. Despite known benefits, such as efficiency, the public has raised concerns about the fairness of AI's decision-making. Here, the concept of perceived fairness, defined as people's emotional, cognitive, and behavioral responses toward the justice of the AI system, has been widely discussed as one of the important factors in determining technology acceptance. In the field of AI in education, students are among the biggest stakeholders; thus, it is important to consider students' perceived fairness of AI decision-making systems to gauge technology acceptance. This study adopted an explanatory sequential mixed-methods research design involving 428 college students to investigate the factors that impact students' perceived fairness of AI's pass-or-fail prediction decisions in the context of math learning and to suggest ways to improve perceived fairness based on students' voices. The findings suggest that students who received a favorable prediction outcome (i.e., pass), who were presented with a system that had lower algorithmic bias and higher transparency, who major(ed) in STEM (vs. non-STEM), who have higher math anxiety, and who received an outcome that matched their math knowledge level (i.e., accurate) tend to report a higher level of perceived fairness for the AI's prediction decisions. Interesting interaction effects on students' perceived fairness were also found involving decision-making, students' math anxiety and knowledge, and the favorability of the outcome. Qualitative thematic analysis revealed students' strong desire for transparency with guidance, explainability, and interactive communication with the AI system, as well as constructive feedback and emotional support. This study contributes to the development of justice theory in the era of AI and suggests practical implications for the design of, and communication strategies with, AI systems in education.
Pages: 16