The moon, the ghetto and artificial intelligence: Reducing systemic racism in computational algorithms

Cited by: 26
Author
Fountain, Jane E. [1 ]
Affiliation
[1] Univ Massachusetts Amherst, Sch Publ Policy, Amherst, MA 01003 USA
Keywords
Digital government; Public management; Systemic racism; Discrimination; Artificial intelligence; Machine learning; Computational algorithms; BIG DATA; HEALTH; SURVEILLANCE; SOCIOLOGY; BIAS
DOI
10.1016/j.giq.2021.101645
Chinese Library Classification (CLC)
G25 (Library science, library undertakings); G35 (Information science, information work)
Discipline Code
1205; 120501
Abstract
Computational algorithms, and the automated decision-making systems that include them, offer potential to improve public policy and organizations. But computational algorithms built on biased data encode those biases into models and their outputs. Systemic racism is institutionalized bias with respect to race, ethnicity, and related attributes. Such bias is located in data that encode the results of discriminatory decisions, in procedures and processes that may intentionally or unintentionally disadvantage people based on race, and in policies that may discriminate by race. Computational algorithms may exacerbate systemic racism if they are not designed, developed, and used (that is, enacted) with attention to identifying and remedying bias specific to race. Advancing social equity in digital governance requires systematic, ongoing efforts to ensure that automated decision-making systems, and their enactment in complex public organizational arrangements, are free from bias.
Pages: 10