The flaws of policies requiring human oversight of government algorithms

Cited by: 57
Authors
Green, Ben [1 ]
Affiliations
[1] University of Michigan, Ann Arbor, MI, USA
Keywords
Human oversight; Human in the loop; Algorithmic governance; Automated decision-making; Artificial intelligence; AI regulation; Human-algorithm interactions; Risk assessment; Automation; Forecasts; Privacy; Bias
DOI
10.1016/j.clsr.2022.105681
Chinese Library Classification
D9 [Law]; DF [Law]
Subject Classification
0301
Abstract
As algorithms become an influential component of government decision-making around the world, policymakers have debated how governments can attain the benefits of algorithms while preventing their harms. One mechanism that has become a centerpiece of global efforts to regulate government algorithms is to require human oversight of algorithmic decisions. Despite the widespread turn to human oversight, these policies rest on an uninterrogated assumption: that people are able to effectively oversee algorithmic decision-making. In this article, I survey 41 policies that prescribe human oversight of government algorithms and find that they suffer from two significant flaws. First, evidence suggests that people are unable to perform the desired oversight functions. Second, as a result of the first flaw, human oversight policies legitimize government uses of faulty and controversial algorithms without addressing the fundamental issues with these tools. Thus, rather than protect against the potential harms of algorithmic decision-making in government, human oversight policies provide a false sense of security in adopting algorithms and enable vendors and agencies to shirk accountability for algorithmic harms. In light of these flaws, I propose a shift from human oversight to institutional oversight as the central mechanism for regulating government algorithms. This institutional approach operates in two stages. First, agencies must justify that it is appropriate to incorporate an algorithm into decision-making and that any proposed forms of human oversight are supported by empirical evidence. Second, these justifications must receive democratic review and approval before the agency can adopt the algorithm.
Pages: 22