The impact of AI errors in a human-in-the-loop process

Cited by: 0
Authors
Ujué Agudo
Karlos G. Liberal
Miren Arrese
Helena Matute
Affiliations
[1] Bikolabs/Biko
[2] Departamento de Psicología, Universidad de Deusto
Source
Cognitive Research: Principles and Implications, Volume 9
Keywords
Human–computer interaction; Automation bias; AI; Decision-making; Human-in-the-loop; Compliance; Artificial intelligence
DOI
Not available
Abstract
Automated decision-making is becoming increasingly common in the public sector. As a result, political institutions recommend the presence of humans in these decision-making processes as a safeguard against potentially erroneous or biased algorithmic decisions. However, the scientific literature on human-in-the-loop performance is not conclusive about the benefits and risks of such human presence, nor does it clarify which aspects of this human–computer interaction may influence the final decision. In two experiments, we simulated an automated decision-making process in which participants judged multiple defendants in relation to various crimes, and we manipulated when participants received support from a supposed automated system with Artificial Intelligence (before or after they made their own judgments). Our results show that human judgment is affected when participants receive incorrect algorithmic support, particularly when they receive it before providing their own judgment, resulting in reduced accuracy. The data and materials for these experiments are freely available at the Open Science Framework: https://osf.io/b6p4z/. Experiment 2 was preregistered.