Will Algorithms Blind People? The Effect of Explainable AI and Decision-Makers' Experience on AI-supported Decision-Making in Government

Cited by: 86
Authors
Janssen, Marijn [1 ]
Hartog, Martijn [2 ]
Matheus, Ricardo [3 ]
Ding, Aaron Yi [4 ]
Kuk, George [5 ]
Affiliations
[1] Delft Univ Technol, ICT & Governance Informat & Commun Technol Sect, Technol Policy & Management Fac, Delft, Netherlands
[2] Delft Univ Technol, Fac Technol Policy & Management, Delft, Netherlands
[3] Delft Univ Technol, Field Open Govt Data & Infrastruct, Informat & Commun Technol Res Grp, Technol Policy & Management Fac, Delft, Netherlands
[4] Delft Univ Technol, Jaffalaan 5, NL-2628 BX Delft, Zuid Holland, Netherlands
[5] Nottingham Trent Univ, Nottingham, England
Keywords
AI; artificial intelligence; decision making; e-government; algorithmic governance; transparency; accountability; XAI; experiment; data-driven government; BIG DATA; ARTIFICIAL-INTELLIGENCE; CHALLENGES; POLICY; IMPACT; MODEL
DOI
10.1177/0894439320980118
Chinese Library Classification (CLC)
TP39 [Computer Applications]
Subject Classification Codes
081203; 0835
Abstract
Computational artificial intelligence (AI) algorithms are increasingly used to support decision making by governments. Yet algorithms often remain opaque to decision makers and devoid of clear explanations for the decisions made. In this study, we used an experimental approach to compare decision making in three situations: humans making decisions (1) without any algorithmic support, (2) supported by business rules (BR), and (3) supported by machine learning (ML). Participants were asked to make the correct decisions in various scenarios, while the BR and ML algorithms could provide either correct or incorrect suggestions to the decision maker. This enabled us to evaluate whether the participants understood the limitations of BR and ML. The experiment shows that algorithms help decision makers make more correct decisions. The findings suggest that explainable AI combined with experience helps decision makers detect incorrect suggestions made by algorithms. However, even experienced participants were unable to identify all of the mistakes. Ensuring that decisions can be understood and traced back is not sufficient to avoid incorrect decisions. The findings imply that algorithms should be adopted with care, and that selecting the appropriate algorithms for supporting decisions and training decision makers are key factors in increasing accountability and transparency.
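To make the BR/ML contrast at the heart of the study concrete, the following is a minimal illustrative Python sketch, not taken from the paper: the eligibility threshold, feature weights, and function names are invented for illustration. A business rule yields a decision whose every step can be traced and explained, while an ML-style suggestion arrives as a bare score whose rationale the decision maker must judge on trust.

# Illustrative sketch (not from the paper): a traceable business rule
# versus an opaque ML-style suggestion for a hypothetical benefits decision.

def business_rule_decision(income: float, dependents: int) -> tuple[bool, str]:
    """Business rule: each step is explicit, so the outcome is explainable."""
    if income > 30_000:  # hypothetical eligibility threshold
        return False, "Rejected: income exceeds the 30,000 threshold"
    if dependents == 0:
        return False, "Rejected: no dependents registered"
    return True, "Approved: income below threshold and dependents present"

def ml_style_suggestion(features: list[float], weights: list[float]) -> bool:
    """ML-style suggestion: a weighted score with no human-readable
    justification, so the decision maker cannot trace why it was made."""
    score = sum(f * w for f, w in zip(features, weights))
    return score > 0.5  # the rationale behind this cutoff stays hidden

# The decision maker sees both suggestions; only the first is traceable.
approved, reason = business_rule_decision(income=24_000, dependents=2)
print(approved, "-", reason)                                  # True - Approved: ...
print(ml_style_suggestion([0.4, 0.9], weights=[0.3, 0.6]))    # opaque True/False

In the experiment described above, either kind of support could emit a wrong suggestion; the sketch shows why an incorrect business rule is easier for an experienced decision maker to spot than an incorrect ML score.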
Pages: 478-493
Page count: 16