Suspicious Minds: the Problem of Trust and Conversational Agents

Cited by: 9
Authors
Ivarsson, Jonas [1]
Lindwall, Oskar [1]
Affiliations
[1] Univ Gothenburg, Dept Appl IT, Gothenburg, Sweden
Source
COMPUTER SUPPORTED COOPERATIVE WORK-THE JOURNAL OF COLLABORATIVE COMPUTING AND WORK PRACTICES | 2023 / Vol. 32 / No. 3
Keywords
Conversation; Human-computer interaction; Natural language processing; Trust; Understanding
DOI
10.1007/s10606-023-09465-8
Chinese Library Classification
TP39 [Computer Applications]
Discipline Codes
081203; 0835
Abstract
In recent years, the field of natural language processing has seen substantial developments, resulting in powerful voice-based interactive services. The quality of the voice and interactivity is sometimes so good that artificial agents can no longer be differentiated from real persons. Thus, discerning whether an interactional partner is a human or an artificial agent is no longer merely a theoretical question but a practical problem society faces. Consequently, the 'Turing test' has moved from the laboratory into the wild. The passage from the theoretical to the practical domain also accentuates understanding as a topic of continued inquiry. When interactions are successful but the artificial agent has not been identified as such, can it also be said that the interlocutors have understood each other? In what ways does understanding figure in real-world human-computer interactions? Based on empirical observations, this study shows that two parallel conceptions of understanding are needed to address these questions. Drawing on ethnomethodology and conversation analysis, we illustrate how parties in a conversation regularly deploy two forms of analysis (categorial and sequential) to understand their interactional partners. The interplay between these forms of analysis shapes the developing sense of interactional exchanges and is crucial for establishing relations. Furthermore, outside of experimental settings, any problems in identifying and categorizing an interactional partner raise concerns regarding trust and suspicion. When suspicion is aroused, shared understanding is disrupted. This study therefore concludes that the proliferation of conversational systems, fueled by artificial intelligence, may have unintended consequences, including impacts on human-human interactions.
Pages: 545-571 (27 pages)