Fairness Warnings and Fair-MAML: Learning Fairly with Minimal Data

Cited: 23
Authors
Slack, Dylan [1]
Friedler, Sorelle A. [2]
Givental, Emile [2]
Affiliations
[1] Univ Calif Irvine, Irvine, CA 92717 USA
[2] Haverford Coll, Haverford, PA 19041 USA
Source
FAT* '20: PROCEEDINGS OF THE 2020 CONFERENCE ON FAIRNESS, ACCOUNTABILITY, AND TRANSPARENCY | 2020
Keywords
machine learning; fairness; meta-learning; covariate shift
DOI
10.1145/3351095.3372839
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Motivated by concerns surrounding the fairness effects of sharing and transferring fair machine learning tools, we propose two algorithms: Fairness Warnings and Fair-MAML. The first is a model-agnostic algorithm that provides interpretable boundary conditions for when a fairly trained model may not behave fairly on similar but slightly different tasks within a given domain. The second is a fair meta-learning approach to train models that can be quickly fine-tuned to specific tasks from only a few sample instances while balancing fairness and accuracy. We experimentally demonstrate the individual utility of each algorithm against relevant baselines and provide, to our knowledge, the first experiment in K-shot fairness, i.e., training a fair model on a new task with only K data points. We then illustrate the usefulness of both algorithms as a combined method for training models from a few data points on new tasks, using Fairness Warnings as interpretable boundary conditions under which the newly trained model may not be fair.
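The K-shot fine-tuning idea the abstract describes can be sketched as a few gradient steps on a task's K examples, trading off a standard accuracy loss against a fairness penalty. The sketch below is illustrative only, not the paper's actual Fair-MAML implementation: the function name `kshot_finetune`, the logistic-regression model, and the squared demographic-parity gap used as the fairness term are all assumptions made for this example (the paper itself considers fairness regularizers within a MAML-style meta-learning loop).

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def kshot_finetune(w, X, y, s, gamma=1.0, lr=0.1, steps=1):
    """Take a few gradient steps on a K-shot task, minimizing
    binary cross-entropy + gamma * (demographic-parity gap)^2.

    w: initial weights (e.g. from a meta-learned model)
    X: (K, d) features, y: (K,) binary labels
    s: (K,) binary protected attribute
    gamma: fairness/accuracy trade-off weight
    """
    w = w.copy()
    for _ in range(steps):
        p = sigmoid(X @ w)
        # gradient of mean binary cross-entropy for logistic regression
        grad_acc = X.T @ (p - y) / len(y)
        # demographic-parity gap: difference in mean predicted positive rate
        gap = p[s == 1].mean() - p[s == 0].mean()
        # gradient of the gap w.r.t. w, via d sigmoid = p * (1 - p) * x
        dgap = (X[s == 1] * (p[s == 1] * (1 - p[s == 1]))[:, None]).mean(axis=0) \
             - (X[s == 0] * (p[s == 0] * (1 - p[s == 0]))[:, None]).mean(axis=0)
        grad_fair = 2.0 * gap * dgap  # gradient of gap**2
        w -= lr * (grad_acc + gamma * grad_fair)
    return w
```

Setting `gamma = 0` recovers plain few-shot fine-tuning; larger values push the fine-tuned model toward equal predicted positive rates across the two protected groups, mirroring the fairness/accuracy balance described in the abstract.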
Pages: 200-209
Page count: 10