Can Fairness be Automated? Guidelines and Opportunities for Fairness-aware AutoML

Cited: 0
Authors
Weerts, Hilde [1 ]
Pfisterer, Florian [2 ,3 ]
Feurer, Matthias [4 ]
Eggensperger, Katharina [4 ,5 ]
Bergman, Edward [4 ]
Awad, Noor [4 ]
Vanschoren, Joaquin [1 ]
Pechenizkiy, Mykola [1 ]
Bischl, Bernd [2 ,3 ]
Hutter, Frank [4 ]
Affiliations
[1] Eindhoven Univ Technol, Eindhoven, Netherlands
[2] Ludwig Maximilians Univ Munchen, Munich, Germany
[3] Munich Ctr Machine Learning, Munich, Germany
[4] Albert Ludwigs Univ Freiburg, Freiburg, Germany
[5] Univ Tubingen, Tubingen, Germany
Funding
European Research Council;
Keywords
BIAS; OPTIMIZATION; EFFICIENT; ALGORITHM;
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104; 0812; 0835; 1405;
Abstract
The field of automated machine learning (AutoML) introduces techniques that automate parts of the development of machine learning (ML) systems, accelerating the process and reducing barriers for novices. However, decisions derived from ML models can reproduce, amplify, or even introduce unfairness in our societies, causing harm to (groups of) individuals. In response, researchers have started to propose AutoML systems that jointly optimize fairness and predictive performance to mitigate fairness-related harm. However, fairness is a complex and inherently interdisciplinary subject, and solely posing it as an optimization problem can have adverse side effects. With this work, we aim to raise awareness among developers of AutoML systems about such limitations of fairness-aware AutoML, while also calling attention to the potential of AutoML as a tool for fairness research. We present a comprehensive overview of different ways in which fairness-related harm can arise and the ensuing implications for the design of fairness-aware AutoML. We conclude that while fairness cannot be automated, fairness-aware AutoML can play an important role in the toolbox of ML practitioners. We highlight several open technical challenges for future work in this direction. Additionally, we advocate for the creation of more user-centered assistive systems designed to tackle challenges encountered in fairness work.
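The abstract frames fairness-aware AutoML as jointly optimizing fairness and predictive performance. As a rough, self-contained sketch of that idea (not the paper's own method), the Python snippet below runs a small random search and keeps the Pareto front of accuracy versus a group fairness violation. The synthetic data, the logistic-regression search space, and the choice of fairlearn's demographic_parity_difference metric are all illustrative assumptions.

# Minimal sketch: model selection as joint optimization of predictive
# performance and a group fairness metric. Everything here (data, search
# space, metric) is an illustrative assumption, not the authors' setup.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from fairlearn.metrics import demographic_parity_difference

rng = np.random.default_rng(0)
n = 2000
sensitive = rng.integers(0, 2, n)                  # binary sensitive attribute
X = rng.normal(size=(n, 5)) + sensitive[:, None]   # features correlated with it
y = (X[:, 0] + 0.5 * sensitive + rng.normal(size=n) > 0.5).astype(int)

X_tr, X_te, y_tr, y_te, s_tr, s_te = train_test_split(
    X, y, sensitive, test_size=0.3, random_state=0
)

# Random search over a tiny hyperparameter space; track both objectives
# per candidate instead of collapsing them into a single score.
candidates = []
for _ in range(20):
    C = 10 ** rng.uniform(-3, 2)
    model = LogisticRegression(C=C, max_iter=1000).fit(X_tr, y_tr)
    pred = model.predict(X_te)
    acc = (pred == y_te).mean()
    dpd = demographic_parity_difference(y_te, pred, sensitive_features=s_te)
    candidates.append((acc, dpd, C))

# Pareto filter: keep a config unless another one is at least as good on
# both objectives and strictly better on one.
pareto = [
    (acc, dpd, C)
    for acc, dpd, C in candidates
    if not any(a >= acc and d <= dpd and (a > acc or d < dpd)
               for a, d, _ in candidates)
]
for acc, dpd, C in sorted(pareto):
    print(f"C={C:.4g}  accuracy={acc:.3f}  demographic parity diff={dpd:.3f}")

Returning the whole Pareto front, rather than scalarizing the two objectives, leaves the fairness/performance trade-off to human judgment, which echoes the paper's conclusion that fairness itself cannot be automated.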
Pages: 639-677
Page count: 39