How do citizens perceive the use of Artificial Intelligence in public sector decisions?

Cited: 17
Authors
Haesevoets, Tessa [1 ,3 ]
Verschuere, Bram [1 ]
Van Severen, Ruben [2 ]
Roets, Arne [2 ]
Affiliations
[1] Univ Ghent, Dept Publ Governance & Management, Ghent, Belgium
[2] Univ Ghent, Dept Dev Personal & Social Psychol, Ghent, Belgium
[3] Univ Ghent, Dept Publ Governance & Management, Apotheekstr 5, B-9000 Ghent, Belgium
Keywords
Artificial Intelligence (AI); Public sector decisions; Hybrid decision-making; Decisional weight; Roles; Legitimacy; Decision type; PROCEDURAL JUSTICE; DEMOCRACY; PEOPLE
DOI
10.1016/j.giq.2023.101906
Chinese Library Classification (CLC)
G25 [Library science, library services]; G35 [Information science, information work]
Discipline classification codes
1205; 120501
Abstract
Artificial Intelligence (AI) has become increasingly prevalent in almost every aspect of our lives. At the same time, a debate about its applications, safety, and privacy is raging. In three studies, we explored how UK respondents perceive the use of AI in various public sector decisions. Our results are fourfold. First, we found that people prefer AI to have considerably less decisional weight than various human decision-makers, namely politicians, citizens, and (human) experts. Second, our findings revealed that people prefer AI to provide input and advice to these human decision-makers, rather than letting AI make decisions by itself. Third, although AI is seen as contributing less to perceived legitimacy than these human decision-makers, its contribution, like that of (human) experts, is seen more in terms of output legitimacy than in terms of input and throughput legitimacy. Finally, our results suggest that the involvement of AI is perceived as more suitable for decisions that are less (rather than more) ideologically charged. Overall, our findings show that people are rather skeptical about using AI in the public domain, but this does not imply that they want to exclude AI entirely from the decision-making process.
Pages: 14