Reframing Deception for Human-Centered AI

Cited by: 0
Authors
Umbrello, Steven [1 ]
Natale, Simone [2 ]
Affiliations
[1] Univ Turin, Dept Philosophy & Educ Sci, Via Giuseppe Verdi, 8, I-10124 Turin, Italy
[2] Univ Turin, Dept Humanities, Via Giuseppe Verdi, 8, Turin, Italy
Keywords
Human-centered AI; Artificial intelligence; Deception; Banal deception; Ethics by design; Applied ethics; ALEXA; BEHAVIOR
DOI
10.1007/s12369-024-01184-4
CLC Number
TP24 [Robotics]
Discipline Classification Codes
080202; 1405
Abstract
The philosophical, legal, and HCI literature on artificial intelligence (AI) has explored the ethical implications of these systems and the values they affect. One aspect that has been only partially explored, however, is the role of deception. Because of the negative connotation of this term, research in AI and Human-Computer Interaction (HCI) has mainly used deception to describe exceptional situations in which the technology either does not work or is used for malicious purposes. Recent theoretical and historical work, however, has shown that deception is a more structural component of AI than is usually acknowledged. AI systems that enter into communication with users strongly invite reactions such as attributions of gender, personality, and empathy, even in the absence of malicious intent and often with potentially positive or functional effects on the interaction. This paper operationalises the Human-Centred AI (HCAI) framework to draw out the implications of this body of work for practical approaches to AI ethics in HCI and design. To achieve this goal, we take up the analytical distinction between "banal" and "strong" deception, originally proposed in theoretical and historical scholarship on AI (Natale, Deceitful Media: Artificial Intelligence and Social Life after the Turing Test, Oxford University Press, New York, 2021), as a starting point for ethical reflections that will equip designers and developers with practical ways to address the problems raised by the complex relationship between deception and communicative AI. The paper considers how HCAI can be applied to conversational AI (CAI) systems so that they are designed to deploy banal deception for social good while avoiding its potential risks.
Pages: 2223-2241
Number of pages: 19