Algorithmic Transference: People Overgeneralize Failures of AI in the Government

Cited by: 33
Authors
Longoni, Chiara [1 ]
Cian, Luca [2 ]
Kyung, Ellie J. [3 ]
Affiliations
[1] Boston Univ, Mkt, Questrom Sch Business, Boston, MA 02215 USA
[2] Univ Virginia, Business Adm, Darden Sch Business, Charlottesville, VA 22903 USA
[3] Babson Coll, Mkt Div, Babson Pk, MA 02157 USA
Keywords
algorithms; artificial intelligence; social categorization; social impact; government; public policy; social identity; variability; perception; responses; machines; aversion; gender
DOI
10.1177/00222437221110139
Chinese Library Classification (CLC)
F [Economics]
Subject Classification Code
02
Abstract
Artificial intelligence (AI) is pervading the government and transforming how public services are provided to consumers across policy areas spanning allocation of government benefits, law enforcement, risk monitoring, and the provision of services. Despite technological improvements, AI systems are fallible and may err. How do consumers respond when learning of AI failures? In 13 preregistered studies (N = 3,724) across a range of policy areas, the authors show that algorithmic failures are generalized more broadly than human failures. This effect is termed "algorithmic transference" as it is an inferential process that generalizes (i.e., transfers) information about one member of a group to another member of that same group. Rather than reflecting generalized algorithm aversion, algorithmic transference is rooted in social categorization: it stems from how people perceive a group of AI systems versus a group of humans. Because AI systems are perceived as more homogeneous than people, failure information about one AI algorithm is transferred to another algorithm to a greater extent than failure information about a person is transferred to another person. Capturing AI's impact on consumers and societies, these results show how the premature or mismanaged deployment of faulty AI technologies may undermine the very institutions that AI systems are meant to modernize.
Pages: 170-188
Page count: 19