A Classification of Feedback Loops and Their Relation to Biases in Automated Decision-Making Systems

Cited by: 8
Authors
Pagan, Nicole [1 ]
Baumann, Joachim [1 ,2 ]
Elokda, Ezzat [3 ]
De Pasquale, Giulia [3 ]
Bolognani, Saverio [3 ]
Hannak, Aniko [1 ]
Affiliations
[1] Univ Zurich, Zurich, Switzerland
[2] Zurich Univ Appl Sci, Zurich, Switzerland
[3] Swiss Fed Inst Technol, Zurich, Switzerland
Source
PROCEEDINGS OF 2023 ACM CONFERENCE ON EQUITY AND ACCESS IN ALGORITHMS, MECHANISMS, AND OPTIMIZATION (EAAMO 2023) | 2023
Funding
Swiss National Science Foundation
Keywords
feedback loops; bias; machine learning; performative prediction; dynamical systems theory; sequential decision-making; automated decision-making; FAIRNESS;
DOI
10.1145/3617694.3623227
Chinese Library Classification (CLC)
TP39 [Computer applications]
Discipline codes
081203; 0835
Abstract
Prediction-based decision-making systems are becoming increasingly prevalent in various domains. Previous studies have demonstrated that such systems are vulnerable to runaway feedback loops, e.g., when police are repeatedly sent back to the same neighborhoods regardless of the actual rate of criminal activity, which exacerbate existing biases. In practice, automated decisions have dynamic feedback effects on the system itself - sometimes referred to in the ML literature as performative prediction - that can perpetuate over time, making it difficult to control the system's evolution through short-sighted design choices. While researchers have started proposing longer-term solutions to prevent adverse outcomes (such as bias towards certain groups), these interventions largely depend on ad hoc modeling assumptions, and a rigorous theoretical understanding of the feedback dynamics in ML-based decision-making systems is currently missing. In this paper, we use the language of dynamical systems theory, a branch of applied mathematics that deals with the analysis of the interconnection of systems with dynamic behaviors, to rigorously classify the different types of feedback loops in the ML-based decision-making pipeline. By reviewing existing scholarly work, we show that this classification covers many examples discussed in the algorithmic fairness community, thereby providing a unifying and principled framework to study feedback loops. Through qualitative analysis, and through a simulation example of recommender systems, we show which specific types of ML biases are affected by each type of feedback loop. We find that the existence of feedback loops in the ML-based decision-making pipeline can perpetuate, reinforce, or even reduce ML biases.
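The runaway feedback loop mentioned in the abstract (patrols repeatedly dispatched to the same neighborhoods) can be illustrated with a minimal toy simulation. This sketch is not from the paper itself; it is a hypothetical two-neighborhood model in which crime is only recorded where a patrol is sent, so the decision rule feeds back into its own future inputs.

```python
import random

def simulate_runaway_loop(true_rates=(0.5, 0.5), steps=1000, seed=0):
    """Toy patrol-allocation loop over two neighborhoods.

    Each step, a patrol is sent to the neighborhood with the higher
    *observed* crime count (ties broken at random), and a crime can
    only be recorded where the patrol goes. Because observations feed
    back into the allocation, one neighborhood can absorb nearly all
    patrols even though the true crime rates are identical.
    """
    rng = random.Random(seed)
    observed = [1, 1]  # small prior counts so either neighborhood can lead
    visits = [0, 0]
    for _ in range(steps):
        # Decision: patrol wherever the data "looks" more criminal.
        if observed[0] == observed[1]:
            target = rng.randrange(2)
        else:
            target = 0 if observed[0] > observed[1] else 1
        visits[target] += 1
        # Feedback: crime is only observed where the patrol was sent.
        if rng.random() < true_rates[target]:
            observed[target] += 1
    return visits, observed

visits, observed = simulate_runaway_loop()
```

Even with identical true rates, whichever neighborhood registers the first few "hits" permanently attracts every subsequent patrol, since its observed count can only grow while the other's stays frozen - the runaway dynamic the abstract describes.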
Pages: 14