Fairness, Equality, and Power in Algorithmic Decision-Making

Cited: 57
Authors
Kasy, Maximilian [1 ]
Abebe, Rediet [2 ]
Affiliations
[1] Univ Oxford, Dept Econ, Oxford, England
[2] Univ Calif Berkeley, Dept Elect Engn & Comp Sci, Berkeley, CA USA
Source
PROCEEDINGS OF THE 2021 ACM CONFERENCE ON FAIRNESS, ACCOUNTABILITY, AND TRANSPARENCY, FACCT 2021 | 2021
Keywords
Algorithmic fairness; inequality; power; auditing; empirical economics; discrimination; bias
DOI
10.1145/3442188.3445919
Chinese Library Classification
TP301 [Theory and Methods]
Subject Classification Code
081202
Abstract
Much of the debate on the impact of algorithms is concerned with fairness, defined as the absence of discrimination for individuals with the same "merit." Drawing on the theory of justice, we argue that leading notions of fairness suffer from three key limitations: they legitimize inequalities justified by "merit;" they are narrowly bracketed, considering only differences of treatment within the algorithm; and they consider between-group and not within-group differences. We contrast this fairness-based perspective with two alternative perspectives: the first focuses on inequality and the causal impact of algorithms, and the second on the distribution of power. We formalize these perspectives, drawing on techniques from causal inference and empirical economics, and characterize when they give divergent evaluations. We present theoretical results and empirical examples that demonstrate this tension. We further use these insights to present a guide for algorithmic auditing and discuss the importance of inequality- and power-centered frameworks in algorithmic decision-making.
Pages: 576-586
Number of Pages: 11
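
The abstract contrasts a fairness-based evaluation (equal treatment of individuals with the same "merit") with an evaluation of an algorithm's causal impact on inequality. The minimal sketch below, on synthetic data, illustrates how the two can diverge; the variables (group, wealth, merit), the threshold decision rule, and the approve-everyone counterfactual are assumptions of this example, not the authors' formalization.

```python
# Minimal illustrative sketch (synthetic data; not the paper's formal model).
# A group-blind merit threshold passes an "equal treatment given merit" check,
# yet widens the between-group outcome gap relative to a counterfactual
# policy that approves everyone.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Two groups with different baseline wealth; "merit" (e.g. a score) is
# correlated with wealth, so its distribution differs across groups.
group = rng.integers(0, 2, size=n)
wealth = rng.normal(loc=np.where(group == 1, 1.0, 0.0), scale=1.0)
merit = 0.7 * wealth + rng.normal(scale=0.7, size=n)

# Algorithmic decision: approve whenever merit exceeds a common threshold.
threshold = 0.5
approve = merit > threshold

# 1) Fairness-based check: among individuals with the same merit, are
#    approval rates equal across groups? Bin edges are aligned with the
#    threshold, so approval is constant within each bin and the gap is zero.
edges = np.linspace(-2.0, 3.0, 11)  # step 0.5, so the threshold is an edge
bins = np.digitize(merit, edges)
gaps = []
for b in np.unique(bins):
    g0 = (bins == b) & (group == 0)
    g1 = (bins == b) & (group == 1)
    if g0.any() and g1.any():
        gaps.append(approve[g1].mean() - approve[g0].mean())
print("max within-merit-bin approval gap:", max(abs(g) for g in gaps))

# 2) Inequality-based check: causal effect of the decision rule on the
#    between-group gap in outcomes, relative to approving everyone.
benefit = 1.0                                  # payoff of approval (assumed)
outcome_algo = wealth + benefit * approve      # outcomes under the algorithm
outcome_all = wealth + benefit                 # counterfactual: approve all

def group_gap(y):
    return y[group == 1].mean() - y[group == 0].mean()

print("gap under the algorithm:", round(group_gap(outcome_algo), 3))
print("gap if everyone approved:", round(group_gap(outcome_all), 3))
```

On this synthetic example the within-merit approval gap is zero, while the between-group outcome gap is larger under the threshold rule than under the approve-everyone counterfactual, which is the kind of divergence between fairness-based and inequality-based evaluations that the paper characterizes.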