In AI we trust? Citizen perceptions of AI in government decision making

Cited by: 41
Authors
Ingrams, Alex [1 ]
Kaufmann, Wesley [2 ]
Jacobs, Daan [2 ]
Affiliations
[1] Leiden Univ, Inst Publ Adm, Leiden, Netherlands
[2] Tilburg Univ, Dept Publ Law & Governance, Tilburg, Netherlands
Source
POLICY AND INTERNET, 2022, Vol. 14, No. 2
Keywords
artificial intelligence; decision making; trust; red tape; ARTIFICIAL-INTELLIGENCE; RED TAPE; BIG DATA; TECHNOLOGY; DISCRETION; MANAGEMENT; SERVICES; BENEFITS; TRUSTWORTHINESS; DETERMINANTS;
DOI
10.1002/poi3.276
Chinese Library Classification (CLC)
G2 [Information and Knowledge Dissemination];
Discipline classification code
05; 0503;
Abstract
Using a survey experiment on the topic of tax auditing, we investigate artificial intelligence (AI) use in government decision making through the lenses of citizen red tape and trust. We find that individuals consider an AI-led decision to be lower in red tape and trustworthiness than a decision made by a human. We also find that highly complex tasks produce decisions with higher levels of perceived red tape, but that this effect does not vary according to whether the task is AI- or human-led. We argue that researchers and practitioners should give more attention to the balance of instrumental and value-based qualities in the design and implementation of AI applications.
Pages: 390-409
Page count: 20
Related papers
50 records in total
  • [1] In AI we trust? Perceptions about automated decision-making by artificial intelligence
    Araujo, Theo
    Helberger, Natali
    Kruikemeier, Sanne
    de Vreese, Claes H.
    AI & SOCIETY, 2020, 35 (03) : 611 - 623
  • [2] Human Control and Discretion in AI-driven Decision-making in Government
    Mitrou, Lilian
    Janssen, Marijn
    Loukis, Euripidis
    14TH INTERNATIONAL CONFERENCE ON THEORY AND PRACTICE OF ELECTRONIC GOVERNANCE (ICEGOV 2021), 2021, : 10 - 16
  • [3] Institutional factors driving citizen perceptions of AI in government: Evidence from a survey experiment on policing
    Schiff, Kaylyn Jackson
    Schiff, Daniel S.
    Adams, Ian T.
    McCrain, Joshua
    Mourtgos, Scott M.
    PUBLIC ADMINISTRATION REVIEW, 2025, 85 (02) : 451 - 467
  • [4] Will Algorithms Blind People? The Effect of Explainable AI and Decision-Makers' Experience on AI-supported Decision-Making in Government
    Janssen, Marijn
    Hartog, Martijn
    Matheus, Ricardo
Ding, Aaron Yi
    Kuk, George
    SOCIAL SCIENCE COMPUTER REVIEW, 2022, 40 (02) : 478 - 493
  • [5] How to Evaluate Trust in AI-Assisted Decision Making? A Survey of Empirical Methodologies
    Vereschak, O.
    Bailly, G.
    Caramiaux, B.
    Proceedings of the ACM on Human-Computer Interaction, 2021, 5 (CSCW2)
  • [6] In AI We Trust: Characteristics Influencing Assortment Planners' Perceptions of AI Based Recommendation Agents
    Bigras, Emilie
    Jutras, Marc-Antoine
    Senecal, Sylvain
    Leger, Pierre-Majorique
    Black, Chrystel
    Robitaille, Nicolas
    Grande, Karine
    Hudon, Christian
    HCI IN BUSINESS, GOVERNMENT, AND ORGANIZATIONS, 2018, 10923 : 3 - 16
  • [7] Do We Trust in AI? Role of Anthropomorphism and Intelligence
    Troshani, Indrit
    Rao Hill, Sally
    Sherman, Claire
    Arthur, Damien
    JOURNAL OF COMPUTER INFORMATION SYSTEMS, 2021, 61 (05) : 481 - 491
  • [8] Can we trust AI? An empirical investigation of trust requirements and guide to successful AI adoption
    Bedue, Patrick
    Fritzsche, Albrecht
    JOURNAL OF ENTERPRISE INFORMATION MANAGEMENT, 2022, 35 (02) : 530 - 549