On the relationship between mind perception and social support of chatbots

Cited by: 9
Authors
Lee, Inju [1]
Hahn, Sowon [1]
Affiliations
[1] Seoul Natl Univ, Dept Psychol, Human Factors Psychol Lab, Seoul, South Korea
Keywords
chatbot; social support; mind perception; human-like mind; user experience; human-computer interaction (HCI); DEHUMANIZATION; CONSEQUENCES; COMPUTERS; RESPONSES; MACHINES; IMPLICIT
DOI
10.3389/fpsyg.2024.1282036
Chinese Library Classification
B84 [Psychology]
Discipline codes
04; 0402
Abstract
The social support provided by chatbots is typically designed to mimic the way humans support one another. However, individuals hold more ambivalent attitudes toward chatbots that provide emotional support (e.g., empathy and encouragement) than toward those that provide informational support (e.g., useful information and advice). This difference may depend on whether individuals associate a given type of support with the realm of the human mind and whether they attribute human-like minds to chatbots. In the present study, we investigated whether perceiving a human-like mind in a chatbot affects users' acceptance of the support it provides. In the experiment, the chatbot asked participants about their interpersonal stress events, prompting them to write down their stressful experiences. Depending on the experimental condition, the chatbot then provided one of two kinds of social support: informational or emotional. Our results showed that when participants explicitly perceived a human-like mind in the chatbot, they considered its support more helpful in resolving stressful events. The relationship between implicit mind perception and perceived message effectiveness differed by type of support: when participants did not implicitly attribute a human-like mind to the chatbot, emotional support undermined the effectiveness of the message, whereas informational support did not. These findings suggest that users' mind perception is essential for understanding the user experience of chatbot social support. They imply that informational support is a reliable choice when building social support chatbots, whereas the effectiveness of emotional support depends on whether users implicitly attribute a human-like mind to the chatbot.
Pages: 10