Questioning Artificial Intelligence: How Racial Identity Shapes the Perceptions of Algorithmic Bias

Cited by: 0
Authors
Kim, Soojong [1 ]
Lee, Joomi [2 ]
Oh, Poong [3 ]
Affiliations
[1] Univ Calif Davis, Davis, CA 95616 USA
[2] Univ Georgia, Athens, GA 30602 USA
[3] Nanyang Technol Univ, Singapore, Singapore
Source
INTERNATIONAL JOURNAL OF COMMUNICATION | 2024, Vol. 18
Keywords
automated decision making; artificial intelligence; race; discrimination; bias; fairness; trust; emotion; DISCRIMINATION; CONSEQUENCES; SELF; FAIRNESS; HEALTH;
DOI
None available
Chinese Library Classification (CLC)
G2 [Information and Knowledge Dissemination]
Discipline Classification Code
05; 0503
Abstract
There is growing concern that automated decision making may discriminate against certain social groups. However, little is known about how people's social identities influence their perceptions of biased automated decisions. Focusing on the context of racial disparity, this study examined whether individuals' social identities (White vs. people of color [POC]) and social contexts that entail discrimination (discrimination target: the self vs. the other) affect perceptions of algorithmic outcomes. A randomized controlled experiment (N = 604) demonstrated that participants' social identity significantly moderated the effect of the discrimination target on these perceptions. Among POC participants, algorithms that discriminated against themselves decreased perceived fairness and trust, whereas among White participants, the opposite pattern was observed. The findings imply that social disparity and inequality, and different social groups' lived experiences of existing discrimination and injustice, should be central to understanding how people make sense of biased algorithms.
Pages: 677-699
Page count: 23