The emotional impact of generative AI: negative emotions and perception of threat

Cited by: 13
Authors
Gabbiadini, Alessandro [1,2]
Ognibene, Dimitri [1]
Baldissarri, Cristina [1]
Manfredi, Anna [1]
Affiliations
[1] Univ Milano Bicocca, Mind & Behav Technol Ctr, Dept Psychol, Milan, Italy
[2] Univ Milano Bicocca, Dept Psychol, Piazza Ateneo Nuovo 1, I-20126 Milan, Italy
Keywords
Generative artificial intelligence; ChatGPT; Voicify; negative emotions; symbolic threat; realistic threat; UNCANNY VALLEY; INTERGROUP; PREJUDICE; EXPECTANCIES; ACCEPTANCE; MODEL
DOI
10.1080/0144929X.2024.2333933
CLC classification
TP3 (Computing technology, computer technology)
Discipline code
0812
Abstract
Generative Artificial Intelligence (AI) is a rapidly expanding field that aims to develop machines capable of performing tasks previously considered unique to humans, such as learning, reasoning, problem-solving, and decision-making. The recent release of several AI-based tools (e.g. ChatGPT) has sparked debate on the potential of this technology and garnered widespread attention in the mainstream media.

Using a socio-psychological approach, in three studies (total N = 410) we demonstrate that when faced with generative AI's ability to reproduce the complexity of human cognitive capabilities, participants reported significantly higher negative emotions than those in the control group. In turn, negative emotions elicited by a specific type of AI (e.g. generative AI) were associated with a perception of threat extended to AI technologies as a whole, understood as a threat to various aspects of human life, including jobs, resources, identity, uniqueness, and value.

Our findings emphasise the importance of considering emotional and societal impacts when developing and deploying advanced AI technologies, and of implementing responsible guidelines to minimise adverse effects. As AI technology advances, addressing public concerns and regulating its usage is crucial for the benefit of society. Achieving this goal will require collaboration between experts, policymakers, and the public.
Pages: 676–693
Page count: 18