Decision augmentation and automation with artificial intelligence: Threat or opportunity for managers?

Cited by: 62
Authors
Leyer, Michael [1 ,2 ]
Schneider, Sabrina [3 ,4 ]
Affiliations
[1] Univ Rostock, Ulmenstr 69, D-18057 Rostock, Germany
[2] Queensland Univ Technol, Brisbane, Qld, Australia
[3] Univ Kassel, Kleine Rosenstr 3, D-34117 Kassel, Germany
[4] Entrepreneurial Sch, MCI, Univ Str 15, A-6020 Innsbruck, Austria
Keywords
Artificial intelligence; Managerial job design; Decision-making; Augmentation; Automation; ALGORITHMS; PEOPLE;
DOI
10.1016/j.bushor.2021.02.026
Chinese Library Classification
F [Economics];
Subject classification code
02;
Abstract
Artificial intelligence (AI) has emerged as a promising and increasingly available technology for managerial decision-making. With the adoption of AI-enabled software, organizations can leverage various benefits of the technology, but they also have to consider the intended and unintended consequences of using it for managerial roles. It is still unclear whether managers will benefit from enhancing their abilities with AI-enabled software or become powerless puppets that do little more than announce AI-enabled software results. Our research has revealed distinct ways in which organizations can use AI-enabled decision-making solutions: as tools or novelties, for decision augmentation or automation, and as either a voluntary or a mandatory option. In this article, we discuss the implications of each of these combinations for the managers concerned. We consider outcomes related to managerial job design and derive practical advice for organizational designers and for managers who work with AI. Our findings provide guidance on how to handle the conflict-ridden relationship between managers and technology with regard to capabilities, responsibilities, and acceptance of AI-enabled software. (c) 2021 Kelley School of Business, Indiana University. Published by Elsevier Inc. All rights reserved.
Pages: 711-724
Number of pages: 14