Overtrusting robots: Setting a research agenda to mitigate overtrust in automation

Cited by: 32
Authors
Aroyo A.M. [1 ]
De Bruyne J. [2 ]
Dheu O. [2 ]
Fosch-Villaronga E. [3 ]
Gudkov A. [4 ]
Hoch H. [5 ]
Jones S. [6 ]
Lutz C. [7 ]
Sætra H. [8 ]
Solberg M. [9 ]
Tamò-Larrieux A. [5 ]
Affiliations
[1] SIRRL, University of Waterloo, Waterloo
[2] KU Leuven Centre for IT and IP Law (CiTiP), KU Leuven, Leuven
[3] eLaw Center for Law and Digital Technologies, Leiden University, Leiden
[4] Faculty of Law: School for Theory of Law and Cross-sectoral Legal Disciplines, National Research University Higher School of Economics, Moscow
[5] FAA-HSG, University of St. Gallen, St. Gallen
[6] Department of Communication, University of Illinois Chicago, Chicago
[7] Nordic Centre for Internet and Society, BI Norwegian Business School, Nydalsveien 37, Oslo
[8] Department of Computer Science and Communication, Østfold University College, Østfold
[9] Department of Health Sciences in Ålesund, Norwegian University of Science and Technology, Trondheim
Source
Paladyn | 2021 / Vol. 12 / No. 1
Funding
European Union Horizon 2020
Keywords
anthropomorphization; liability; deception; education; overtrust; robots; social robots; trust
DOI
10.1515/pjbr-2021-0029
Abstract
There is increasing attention given to the concept of trustworthiness for artificial intelligence and robotics. However, trust is highly context-dependent, varies among cultures, and requires reflection on others' trustworthiness, appraising whether there is enough evidence to conclude that these agents deserve to be trusted. Moreover, little research exists on what happens when too much trust is placed in robots and autonomous systems. Conceptual clarity and a shared framework for approaching overtrust are missing. In this contribution, we offer an overview of pressing topics in the context of overtrust and robots and autonomous systems. Our review mobilizes insights from in-depth conversations at a multidisciplinary workshop on trust in human-robot interaction (HRI), held at a leading robotics conference in 2020. A broad range of participants brought in their expertise, allowing the formulation of a forward-looking research agenda on overtrust and automation biases in robotics and autonomous systems. Key points include the need for multidisciplinary understandings situated in an ecosystem perspective, the consideration of adjacent concepts such as deception and anthropomorphization, a connection to ongoing legal discussions through the topic of liability, and a socially embedded understanding of overtrust in education and literacy matters. The article integrates diverse literature and provides a basis for a common understanding of overtrust in the context of HRI. © 2021 Alexander M. Aroyo et al., published by De Gruyter.
Pages: 423-436
Page count: 13
References
103 entries in total
  • [1] White Paper on Artificial Intelligence - A European approach to excellence and trust, European Commission, (2020)
  • [2] Lee J.D., See K.A., Trust in automation: Designing for appropriate reliance, Hum. Factors, 46, no. 1, pp. 50-80, (2004)
  • [3] Meyerson D., Weick K.E., Kramer R.M., Swift trust and temporary groups, in: Kramer R.M., Tyler T.R. (Eds.), Trust in Organizations: Frontiers of Theory and Research, pp. 166-195, (1996)
  • [4] Hancock P.A., Billings D.R., Schaefer K.E., Chen J.Y., De Visser E.J., Parasuraman R., A meta-analysis of factors affecting trust in human-robot interaction, Hum. Factors, 53, no. 5, pp. 517-527, (2011)
  • [5] Sætra H.S., Social robot deception and the culture of trust, Paladyn J. Behav. Robot., 12, no. 1, pp. 276-286, (2021)
  • [6] Levine E.E., Schweitzer M.E., Prosocial lies: When deception breeds trust, Organ. Behav. Hum. Decis. Process., 126, pp. 88-106, (2015)
  • [7] Robinson S.C., Trust, transparency, and openness: How inclusion of cultural values shapes Nordic national public policy strategies for artificial intelligence (AI), Technol. Soc., 63, (2020)
  • [8] Felzmann H., Fosch-Villaronga E., Lutz C., Tamò-Larrieux A., Transparency you can trust: Transparency requirements for artificial intelligence between legal norms and contextual concerns, Big Data Soc., 6, no. 1, pp. 1-14, (2019)
  • [9] Parasuraman R., Riley V., Humans and automation: Use, misuse, disuse, abuse, Hum. Factors, 39, no. 2, pp. 230-253, (1997)
  • [10] "ethics Guidelines for Trustworthy AI," European Commission, (2020)