In AI we trust? Citizen perceptions of AI in government decision making

Cited by: 41
Authors
Ingrams, Alex [1 ]
Kaufmann, Wesley [2 ]
Jacobs, Daan [2 ]
Affiliations
[1] Leiden Univ, Inst Publ Adm, Leiden, Netherlands
[2] Tilburg Univ, Dept Publ Law & Governance, Tilburg, Netherlands
Source
POLICY AND INTERNET | 2022 / Vol. 14 / No. 02
Keywords
artificial intelligence; decision making; trust; red tape; ARTIFICIAL-INTELLIGENCE; RED TAPE; BIG DATA; TECHNOLOGY; DISCRETION; MANAGEMENT; SERVICES; BENEFITS; TRUSTWORTHINESS; DETERMINANTS;
DOI
10.1002/poi3.276
Chinese Library Classification
G2 [Information and Knowledge Dissemination];
Discipline classification codes
05; 0503;
Abstract
Using a survey experiment on the topic of tax auditing, we investigate artificial intelligence (AI) use in government decision making through the lenses of citizen red tape and trust. We find that individuals consider an AI-led decision to be lower in both red tape and trustworthiness than a decision made by a human. We also find that highly complex tasks produce decisions with higher levels of perceived red tape, but that this effect does not vary according to whether the task is AI- or human-led. We argue that researchers and practitioners should give more attention to the balance of instrumental and value-based qualities in the design and implementation of AI applications.
Pages: 390-409
Page count: 20
Related Papers (50 items)
[21]   Humans vs. AI: The Role of Trust, Political Attitudes, and Individual Characteristics on Perceptions about Automated Decision Making Across Europe [J].
Araujo, Theo ;
Brosius, Anna ;
Goldberg, Andreas C. ;
Moeller, Judith ;
de Vreese, Claes H. .
INTERNATIONAL JOURNAL OF COMMUNICATION, 2023, 17: 6222-6249
[22]   Requirements for trustworthy AI-enabled automated decision-making in the public sector: A systematic review [J].
Agbabiaka, Olusegun ;
Ojo, Adegboyega ;
Connolly, Niall .
TECHNOLOGICAL FORECASTING AND SOCIAL CHANGE, 2025, 215
[23]   Transparency and the Black Box Problem: Why We Do Not Trust AI [J].
von Eschenbach, W. J. .
PHILOSOPHY & TECHNOLOGY, 2021, 34 (04): 1607-1622
[24]   Can We Learn From Trust in Simulation to Gain Trust in AI? [J].
Tolk, Andreas .
ERGONOMICS IN DESIGN, 2024
[25]   The Next Frontier: AI We Can Really Trust [J].
Holzinger, Andreas .
MACHINE LEARNING AND PRINCIPLES AND PRACTICE OF KNOWLEDGE DISCOVERY IN DATABASES, ECML PKDD 2021, PT I, 2021, 1524: 427-440
[26]   Do we want AI judges? The acceptance of AI judges' judicial decision-making on moral foundations [J].
Kim, Taenyun ;
Peng, Wei .
AI & SOCIETY, 2024: 3683-3696
[27]   Is this AI sexist? The effects of a biased AI's anthropomorphic appearance and explainability on users' bias perceptions and trust [J].
Hou, Tsung-Yu ;
Tseng, Yu-Chia ;
Yuan, Chien Wen .
INTERNATIONAL JOURNAL OF INFORMATION MANAGEMENT, 2024, 76
[28]   In artificial intelligence (AI) we trust: A qualitative investigation of AI technology acceptance [J].
Hasija, Abhinav ;
Esper, Terry L. .
JOURNAL OF BUSINESS LOGISTICS, 2022, 43 (03): 388-412
[29]   A Quantum Probability Approach to Improving Human-AI Decision Making [J].
Humr, Scott ;
Canan, Mustafa ;
Demir, Mustafa .
ENTROPY, 2025, 27 (02)
[30]   Trust in AI: why we should be designing for APPROPRIATE reliance [J].
Benda, Natalie C. ;
Novak, Laurie L. ;
Reale, Carrie ;
Ancker, Jessica S. .
JOURNAL OF THE AMERICAN MEDICAL INFORMATICS ASSOCIATION, 2021, 29 (01): 207-212