AI Decision Making with Dignity? Contrasting Workers' Justice Perceptions of Human and AI Decision Making in a Human Resource Management Context

Times Cited: 80
Authors
Bankins, Sarah [1 ]
Formosa, Paul [2 ]
Griep, Yannick [3 ]
Richards, Deborah [4 ]
Affiliations
[1] Macquarie Univ, Macquarie Business Sch, Dept Management, North Ryde Campus, Sydney, NSW 2109, Australia
[2] Macquarie Univ, Fac Arts, Dept Philosophy, North Ryde Campus, Sydney, NSW 2109, Australia
[3] Radboud Univ Nijmegen, Behav Sci Inst, Postbus 9104, NL-6500 HE Nijmegen, Netherlands
[4] Macquarie Univ, Fac Sci & Engn, Dept Comp, North Ryde Campus, Sydney, NSW 2109, Australia
Keywords
Artificial intelligence; Human resource management; Algorithmic management; Ethical AI; Artificial intelligence at work; Interactional justice; DEHUMANIZATION; WORKPLACE
DOI
10.1007/s10796-021-10223-8
Chinese Library Classification (CLC)
TP [Automation technology; computer technology]
Discipline Classification Code
0812
Abstract
Using artificial intelligence (AI) to make decisions in human resource management (HRM) raises questions of how fair employees perceive these decisions to be and whether they experience respectful treatment (i.e., interactional justice). In this experimental survey study with open-ended qualitative questions, we examine decision making across six HRM functions and manipulate the decision maker (AI or human) and decision valence (positive or negative) to determine their impact on individuals' experiences of interactional justice, trust, dehumanization, and perceptions of decision-maker role appropriateness. In terms of decision makers, human decision makers generally produced stronger perceptions of respectful treatment than AI decision makers. In terms of decision valence, positive decisions generally produced stronger perceptions of respectful treatment than negative decisions. Where these effects conflict, on some indicators people preferred positive AI decisions over negative human decisions. Qualitative responses show how people identify justice concerns with both AI and human decision making. We outline implications for theory, practice, and future research.
Pages: 857-875
Page count: 19