Implementing equitable and intersectionality-aware ML in education: A practical guide

Cited by: 4
Authors
Mangal, Mudit [1 ]
Pardos, Zachary A. [2 ,3 ]
Affiliations
[1] Univ Calif Berkeley, Sch Informat, Berkeley, CA USA
[2] Univ Calif Berkeley, Sch Educ, Berkeley, CA USA
[3] Univ Calif Berkeley, 2121 Berkeley Way, Berkeley, CA 94720 USA
Keywords
algorithmic fairness; educational decision support systems; equity framework; institutional values; intersectionality; ML in education; FAIRNESS; BIAS;
DOI
10.1111/bjet.13484
CLC number
G40 [Education];
Subject classification codes
040101; 120403;
Abstract
The greater the proliferation of AI in educational contexts, the more important it becomes to ensure that AI adheres to the equity and inclusion values of an educational system or institution. Given that modern AI is based on historic datasets, mitigating historic biases with respect to protected classes (i.e., fairness) is an important component of this value alignment. Although extensive research has been done on AI fairness in education, there has been a lack of guidance for practitioners, which could enhance the practical uptake of these methods. In this work, we present a practitioner-oriented, step-by-step framework, based on findings from the field, for implementing AI fairness techniques. We also present an empirical case study that applies this framework in the context of a grade prediction task using data from a large public university. Our novel findings from the case study and extended analyses underscore the importance of incorporating intersectionality (such as race and gender) as a central institutional equity and inclusion value. Moreover, our research demonstrates the effectiveness of bias mitigation techniques, like adversarial learning, in enhancing fairness, particularly for intersectional categories such as race-gender and race-income.
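As a concrete illustration of the bias mitigation approach named in the abstract, the sketch below shows one common form of adversarial debiasing applied to a binary pass/fail grade-prediction task with an intersectional protected attribute (an integer group id such as race x gender). The network sizes, the loss weighting lambda_adv, the alternating-update scheme, and all variable names are illustrative assumptions, not the paper's exact configuration.

import torch
import torch.nn as nn

class GradePredictor(nn.Module):
    """Predicts a pass/fail logit from student features."""
    def __init__(self, n_features, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):
        return self.net(x).squeeze(-1)

class GroupAdversary(nn.Module):
    """Tries to recover the intersectional group from the predictor's output."""
    def __init__(self, n_groups, hidden=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1, hidden), nn.ReLU(),
            nn.Linear(hidden, n_groups),
        )

    def forward(self, logit):
        return self.net(logit.unsqueeze(-1))

def train_step(predictor, adversary, opt_p, opt_a, x, y, group, lambda_adv=1.0):
    # x: (batch, n_features) floats; y: (batch,) 0/1 pass/fail labels;
    # group: (batch,) integer ids for the intersectional category,
    # e.g. group = race_id * n_genders + gender_id (illustrative encoding).
    bce = nn.BCEWithLogitsLoss()
    ce = nn.CrossEntropyLoss()

    # 1) Adversary update: learn to predict the group from the model's output.
    opt_a.zero_grad()
    adv_loss = ce(adversary(predictor(x).detach()), group)
    adv_loss.backward()
    opt_a.step()

    # 2) Predictor update: fit the label while making the group hard to
    #    recover from its output (i.e., maximize the adversary's loss).
    opt_p.zero_grad()
    logit = predictor(x)
    pred_loss = bce(logit, y.float())
    fool_loss = -ce(adversary(logit), group)
    (pred_loss + lambda_adv * fool_loss).backward()
    opt_p.step()
    return pred_loss.item(), adv_loss.item()

Other intersectional categories mentioned in the abstract, such as race-income, would follow the same pattern with a different group encoding; lambda_adv trades predictive accuracy against how unpredictable the protected group is from the model's outputs.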
Pages: 2003-2038
Page count: 36