Can Fairness be Automated? Guidelines and Opportunities for Fairness-aware AutoML

Cited by: 0
Authors
Weerts, Hilde [1 ]
Pfisterer, Florian [2 ,3 ]
Feurer, Matthias [4 ]
Eggensperger, Katharina [4 ,5 ]
Bergman, Edward [4 ]
Awad, Noor [4 ]
Vanschoren, Joaquin [1 ]
Pechenizkiy, Mykola [1 ]
Bischl, Bernd [2 ,3 ]
Hutter, Frank [4 ]
Affiliations
[1] Eindhoven Univ Technol, Eindhoven, Netherlands
[2] Ludwig Maximilians Univ Munchen, Munich, Germany
[3] Munich Ctr Machine Learning, Munich, Germany
[4] Albert Ludwigs Univ Freiburg, Freiburg, Germany
[5] Univ Tubingen, Tubingen, Germany
Funding
European Research Council;
Keywords
BIAS; OPTIMIZATION; EFFICIENT; ALGORITHM;
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The field of automated machine learning (AutoML) introduces techniques that automate parts of the development of machine learning (ML) systems, accelerating the process and reducing barriers for novices. However, decisions derived from ML models can reproduce, amplify, or even introduce unfairness in our societies, causing harm to (groups of) individuals. In response, researchers have started to propose AutoML systems that jointly optimize fairness and predictive performance to mitigate fairness-related harm. However, fairness is a complex and inherently interdisciplinary subject, and solely posing it as an optimization problem can have adverse side effects. With this work, we aim to raise awareness among developers of AutoML systems about such limitations of fairness-aware AutoML, while also calling attention to the potential of AutoML as a tool for fairness research. We present a comprehensive overview of different ways in which fairness-related harm can arise and the ensuing implications for the design of fairness-aware AutoML. We conclude that while fairness cannot be automated, fairness-aware AutoML can play an important role in the toolbox of ML practitioners. We highlight several open technical challenges for future work in this direction. Additionally, we advocate for the creation of more user-centered assistive systems designed to tackle challenges encountered in fairness work.
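The abstract's central technical idea, jointly optimizing fairness and predictive performance rather than accuracy alone, can be illustrated with a toy model-selection step. The sketch below is hypothetical and not the paper's method: the function names, the toy data, and the choice of demographic parity difference as the fairness metric are all assumptions made for illustration. It scores candidate models on both objectives and keeps the Pareto-optimal ones.

```python
# Toy sketch of fairness-aware model selection (illustrative only):
# score candidates on accuracy (higher is better) and demographic
# parity difference (lower is better), then keep the Pareto front.

def accuracy(y_true, y_pred):
    """Fraction of predictions matching the true labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def demographic_parity_diff(y_pred, groups):
    """Largest gap in positive-prediction rates across groups."""
    rates = {}
    for g in set(groups):
        preds = [p for p, gr in zip(y_pred, groups) if gr == g]
        rates[g] = sum(preds) / len(preds)
    vals = sorted(rates.values())
    return vals[-1] - vals[0]

def pareto_front(candidates):
    """Keep (name, acc, unfairness) tuples not dominated by any other."""
    front = []
    for name, acc, unf in candidates:
        dominated = any(
            a >= acc and u <= unf and (a > acc or u < unf)
            for _, a, u in candidates
        )
        if not dominated:
            front.append((name, acc, unf))
    return front

# Hypothetical evaluation data: labels, group membership, and the
# predictions of two candidate models produced by an AutoML search.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
models = {
    "model_1": [1, 0, 1, 1, 1, 0, 1, 1],  # equal positive rates per group
    "model_2": [1, 0, 1, 0, 0, 0, 1, 0],  # more accurate, less fair
}
candidates = [
    (name, accuracy(y_true, pred), demographic_parity_diff(pred, groups))
    for name, pred in models.items()
]
# Both models survive: one is fairer, the other more accurate, so a
# single scalar objective would hide this trade-off from the user.
print(pareto_front(candidates))
```

The point of the multi-objective formulation is visible even in this toy: neither model dominates the other, so the system must surface the trade-off rather than silently pick one, which is one reason the paper argues fairness cannot simply be automated away.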
Pages: 639-677 (39 pages)