How do the existing fairness metrics and unfairness mitigation algorithms contribute to ethical learning analytics?

Cited by: 32
Authors
Deho, Oscar Blessed [1]
Zhan, Chen [1]
Li, Jiuyong [1]
Liu, Jixue [1]
Liu, Lin [1]
Le, Thuc Duy [1]
Affiliations
[1] Univ South Australia, UniSA STEM, Adelaide, SA, Australia
Funding
Australian Research Council
Keywords
ethical LA; fairness; learning analytics; predictive modelling; virtual learning environment; DISCRIMINATION; PREDICTION;
DOI
10.1111/bjet.13217
Chinese Library Classification
G40 [Education]
Subject classification codes
040101; 120403
Abstract
With the widespread use of learning analytics (LA), ethical concerns about fairness have been raised. Research shows that LA models may be biased against students from certain demographic subgroups. Although fairness has received significant attention in the broader machine learning (ML) community over the last decade, attention to fairness in LA is more recent. Furthermore, guidance on which unfairness mitigation algorithm or metric to use in a particular context remains scarce. On this premise, we performed a comparative evaluation of selected unfairness mitigation algorithms regarded in the fair ML community as having shown promising results. Using three years of program dropout data from an Australian university, we evaluated how these algorithms contribute to ethical LA by testing hypotheses across fairness and performance metrics. Interestingly, our results show that data bias does not necessarily result in predictive bias. Perhaps unsurprisingly, our test of the fairness-utility tradeoff shows that ensuring fairness does not always lead to a drop in utility; indeed, under specific circumstances, ensuring fairness might even enhance utility. Our findings may, to some extent, guide the selection of fairness algorithms and metrics for a given context.
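To make the abstract's notions concrete, the following is a minimal sketch (not the authors' code) of two group-fairness metrics commonly compared in studies of this kind. The function names, the privileged/unprivileged encoding, and the toy dropout data are illustrative assumptions, not taken from the paper.

```python
def statistical_parity_difference(y_pred, group):
    """P(pred=1 | unprivileged) - P(pred=1 | privileged).
    group: 1 = privileged subgroup, 0 = unprivileged. 0.0 means parity."""
    priv = [p for p, g in zip(y_pred, group) if g == 1]
    unpriv = [p for p, g in zip(y_pred, group) if g == 0]
    return sum(unpriv) / len(unpriv) - sum(priv) / len(priv)

def equal_opportunity_difference(y_true, y_pred, group):
    """Difference in true-positive rates between the unprivileged and
    privileged groups. 0.0 means equal opportunity."""
    def tpr(g_val):
        pos = [p for p, t, g in zip(y_pred, y_true, group)
               if g == g_val and t == 1]
        return sum(pos) / len(pos)
    return tpr(0) - tpr(1)

# Toy example: predicted dropout (1) vs actual dropout for 8 students.
y_true = [1, 0, 1, 1, 0, 1, 0, 1]
y_pred = [1, 0, 1, 0, 0, 1, 1, 1]
group  = [1, 1, 1, 1, 0, 0, 0, 0]  # hypothetical demographic subgroup flag

print(statistical_parity_difference(y_pred, group))        # 0.25
print(equal_opportunity_difference(y_true, y_pred, group)) # ~0.333
```

A value far from zero on either metric signals predictive bias against one subgroup; the paper's point is that such predictive bias need not mirror bias in the raw data, so both must be measured.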
Pages: 822-843 (22 pages)