Employees Adhere More to Unethical Instructions from Human Than AI Supervisors: Complementing Experimental Evidence with Machine Learning

Cited by: 28
Authors
Lanz, Lukas [1 ]
Briker, Roman [2 ]
Gerpott, Fabiola H. [1 ,3 ]
Affiliations
[1] WHU Otto Beisheim Sch Management, Dusseldorf, Germany
[2] Maastricht Univ, Dept Org Strategy & Entrepreneurship, Maastricht, Netherlands
[3] Vrije Univ Amsterdam, Amsterdam, Netherlands
Keywords
Unethical leadership; Artificial intelligence; AI leadership; Perceived mind; Implicit leadership theory; People; Bias; Mind; Recommendations; Reciprocity; Automation; Algorithms; Psychology
DOI
10.1007/s10551-023-05393-1
Chinese Library Classification
F [Economics]
Subject Classification Code
02
Abstract
The role of artificial intelligence (AI) in organizations has fundamentally changed from performing routine tasks to supervising human employees. While prior studies focused on normative perceptions of such AI supervisors, employees' behavioral reactions toward them have remained largely unexplored. We draw on theories of AI aversion and appreciation to address the ambiguity in this field and investigate whether and why employees adhere to unethical instructions from a human versus an AI supervisor. In addition, we identify employee characteristics that affect this relationship. To inform this debate, we conducted four experiments (total N = 1701) and used two state-of-the-art machine learning algorithms (causal forest and transformers). We consistently find that employees adhere less to unethical instructions from an AI than from a human supervisor. Further, individual characteristics such as age and the tendency to comply without dissent constitute important boundary conditions. Study 1 identified the supervisor's perceived mind as an explanatory mechanism. We generate further insights into this mediator in two pre-registered studies that experimentally manipulate perceived mind between two AI supervisors (Study 2) and between two human supervisors (Study 3). In pre-registered Study 4, we replicate the resistance to unethical instructions from AI supervisors in an incentivized experimental setting. Our research opens the 'black box' of human behavior toward AI supervisors, particularly in the moral domain, and showcases how organizational researchers can use machine learning methods as powerful tools to complement experimental research and generate more fine-grained insights.
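The boundary conditions reported above (e.g., age, compliance without dissent) are the kind of moderators a causal forest surfaces by estimating heterogeneous treatment effects. The sketch below illustrates such a workflow, assuming Python with the econml package; the data file, column names, and model settings are hypothetical and not the authors' actual implementation.

```python
# Hypothetical causal-forest sketch: heterogeneous effects of supervisor
# type (AI vs. human) on adherence to an unethical instruction.
import pandas as pd
from econml.dml import CausalForestDML
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

df = pd.read_csv("study_data.csv")             # hypothetical pooled data
Y = df["adherence"].values                     # outcome: instruction adherence
T = df["ai_supervisor"].values                 # treatment: 1 = AI, 0 = human
X = df[["age", "compliance_tendency"]].values  # candidate moderators

cf = CausalForestDML(
    model_y=RandomForestRegressor(n_estimators=200, random_state=0),
    model_t=RandomForestClassifier(n_estimators=200, random_state=0),
    discrete_treatment=True,   # randomized binary treatment
    n_estimators=2000,
    random_state=0,
)
cf.fit(Y, T, X=X)

# Per-participant effect estimates; averaging within age bands shows
# whether the AI-vs-human adherence gap varies with age.
cate = cf.effect(X)
bands = pd.cut(df["age"], bins=4)
print(pd.Series(cate, index=df.index).groupby(bands, observed=True).mean())
```

In such a setup, systematic variation of the estimated conditional effects across covariate values is what marks a boundary condition; the transformer analyses mentioned in the abstract serve a different purpose (modeling text data) and are not sketched here.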
Pages: 625-646 (22 pages)