Should My Agent Lie for Me? Public Moral Perspectives on Deceptive AI

Cited by: 0
Authors
Sarkadi, Stefan [1]
Mei, Peidong [2]
Awad, Edmond [2]
Affiliations
[1] King's College London, London, England
[2] University of Exeter, Exeter, England
Source
AUTONOMOUS AGENTS AND MULTIAGENT SYSTEMS. BEST AND VISIONARY PAPERS, AAMAS 2023 WORKSHOPS | 2024, Vol. 14456
Keywords
Deceptive AI; AI Ethics; User Study; ETHICS; MIND;
DOI
10.1007/978-3-031-56255-6_9
CLC number
TP [Automation technology; computer technology]
Subject classification code
0812
Abstract
Artificial Intelligence (AI) advancements might deliver autonomous agents capable of human-like deception. Such capabilities have mostly been negatively perceived in HCI design, as they can have serious ethical implications. However, AI deception might be beneficial in some situations. Previous research has shown that machines designed with some level of dishonesty can elicit increased cooperation with humans. This raises several questions: Are there future-of-work situations where deception by machines can be an acceptable behaviour? Is this different from human deceptive behaviour? How does AI deception influence human trust and the adoption of deceptive machines? In this paper, we describe the results of a user study published in the proceedings of AAMAS 2023. The study answered these questions by considering different contexts and job roles. Here, we contextualise the results of the study by proposing ways forward to achieve a framework for developing Deceptive AI responsibly. We provide insights and lessons that will be crucial in understanding what factors shape the social attitudes and adoption of AI systems that may be required to exhibit dishonest behaviour as part of their jobs.
Pages: 151-179
Page count: 29