Do We Trust Artificially Intelligent Assistants at Work? An Experimental Study

Cited by: 0
Authors
Cvetkovic, Anica [1]
Savela, Nina [1,2]
Latikka, Rita [1]
Oksanen, Atte [1]
Affiliations
[1] Tampere Univ, Fac Social Sci, Kalevantie 4, Tampere 33100, Finland
[2] LUT Univ, Sch Engn Sci, Lappeenranta, Finland
Keywords
artificial intelligence; assistants; control; survey experiment; trust; work; user acceptance; automation; personality; algorithms; happiness; autonomy
DOI
10.1155/2024/1602237
Chinese Library Classification
B84 [Psychology]
Subject Classification Code
04; 0402
Abstract
The fourth industrial revolution is bringing artificial intelligence (AI) into various workplaces, and many businesses worldwide are already capitalizing on AI assistants. Trust is essential for the successful integration of AI into organizations. We hypothesized that people have higher trust in human assistants than AI assistants and that people trust AI assistants more if they have more control over their activities. To test our hypotheses, we utilized a survey experiment with 828 participants from Finland. Results showed that participants would rather entrust their schedule to a person than to an AI assistant. Having control increased trust in both human and AI assistants. The results of this study imply that people in Finland still have higher trust in traditional workplaces where people, rather than smart machines, perform assisting work. The findings are of relevance for designing trustworthy AI assistants, and they should be considered when integrating AI technology into organizations.
Pages: 12