Questioning Artificial Intelligence: How Racial Identity Shapes the Perceptions of Algorithmic Bias

Cited by: 0
Authors
Kim, Soojong [1 ]
Lee, Joomi [2 ]
Oh, Poong [3 ]
Affiliations
[1] Univ Calif Davis, Davis, CA 95616 USA
[2] Univ Georgia, Athens, GA 30602 USA
[3] Nanyang Technol Univ, Singapore, Singapore
Source
INTERNATIONAL JOURNAL OF COMMUNICATION | 2024, Vol. 18
Keywords
automated decision making; artificial intelligence; race; discrimination; bias; fairness; trust; emotion; DISCRIMINATION; CONSEQUENCES; SELF; FAIRNESS; HEALTH
DOI
Not available
Chinese Library Classification (CLC)
G2 [Information and Knowledge Communication]
Discipline Classification Code
05; 0503
Abstract
There is growing concern regarding the potential for automated decision making to discriminate against certain social groups. However, little is known about how people's social identities influence their perceptions of biased automated decisions. Focusing on the context of racial disparity, this study examined whether individuals' social identities (White vs. people of color [POC]) and social contexts that entail discrimination (discrimination target: the self vs. the other) affect perceptions of algorithm outcomes. A randomized controlled experiment (N = 604) demonstrated that a participant's social identity significantly moderated the effects of the discrimination target on these perceptions. Among POC participants, algorithms that discriminated against the self decreased perceived fairness and trust, whereas among White participants, the opposite pattern was observed. The findings imply that social disparity and inequality, and different social groups' lived experiences of existing discrimination and injustice, should be at the center of understanding how people make sense of biased algorithms.
Pages: 677-699
Number of pages: 23