When Human-AI Interactions Become Parasocial: Agency and Anthropomorphism in Affective Design

Cited by: 1
Authors
Maeda, Takuya [1 ]
Quan-Haase, Anabel [1 ]
Affiliations
[1] Western University, London, ON, Canada
Source
Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT 2024), 2024
Keywords
trust; chatbots; anthropomorphism; human-AI interactions; ethics; parasociality; design; communication
DOI
10.1145/3630106.3658956
Chinese Library Classification (CLC)
TP18 [Theory of artificial intelligence]
Discipline codes
081104; 0812; 0835; 1405
Abstract
With the continuous improvement of large language models (LLMs), chatbots can produce coherent, fluent word sequences that mirror natural human language. While natural language and human-like conversational styles enable chatbots to be used in a wide range of everyday settings, these usability-enhancing features can also have unintended consequences, such as making fallible information seem trustworthy by emphasizing friendliness and closeness. This has serious implications for information retrieval tasks performed with chatbots. In this paper, we provide an overview of the literature on parasociality, social affordance, and trust, and bridge these concepts within human-AI interactions. We critically examine how chatbot "roleplaying" and user role projection co-produce a pseudo-interactive, technologically mediated space with imbalanced dynamics between users and chatbots. Building on this review, we develop a conceptual framework of parasociality in chatbots that describes interactions between humans and anthropomorphized chatbots. We dissect how chatbots use personal pronouns, conversational conventions, affirmations, and similar strategies to position themselves as users' companions or assistants, and how these tactics induce trust-forming behaviors in users. Finally, drawing on the conceptual framework, we outline a set of ethical concerns that emerge from parasociality, including illusions of reciprocal engagement, task misalignment, and leaks of sensitive information. This paper argues that these possible consequences arise from a positive feedback cycle wherein anthropomorphized chatbot features encourage users to fill in the context around predictive outcomes.
Pages: 1068-1077 (10 pages)