How can companies handle paradoxes to enhance trust in artificial intelligence solutions? A qualitative research

Cited: 0
Authors
Bakonyi, Zoltan [1 ]
Affiliations
[1] Boston Consulting Grp Inc, Budapest, Hungary
Keywords
Machine learning; Artificial intelligence; AI implementation; AI success factors; Human-machine trust; Trust paradoxes; MANAGEMENT;
DOI
10.1108/JOCM-01-2023-0026
Chinese Library Classification
C93 [Management];
Discipline Classification Codes
12; 1201; 1202; 120202;
Abstract
Purpose: This study explores trust's impact on AI project success. Companies cannot leverage AI without employee trust. While analytics features such as speed and precision can build trust, they may also erode it during implementation, leading to paradoxes. This study identifies these paradoxes and proposes strategies to manage them.
Design/methodology/approach: This paper applies a grounded theory approach based on 35 interviews with senior managers, users, and implementers of analytics solutions at large European companies.
Findings: It identifies seven paradoxes, namely the knowledge substitution, task substitution, domain expert, time, error, reference, and experience paradoxes, and provides real-life examples of managing them.
Research limitations/implications: The limitations of this paper include its focus on machine learning projects from the last two years, potentially overlooking longer-term trends. The study's micro-level perspective on implementation projects may limit broader insights, and the research primarily examines European contexts, potentially missing global perspectives. Additionally, the qualitative methodology may limit the generalizability of the findings. Finally, while the paper identifies trust paradoxes, it does not offer an exhaustive exploration of their dynamics or quantitative measurements of their strength.
Practical implications: Several tactics for tackling trust paradoxes in AI projects are identified, including a change roadmap, data "load tests", early expert involvement, model descriptions, piloting, plans for machine-human cooperation, learning time, and a backup system. Applying these can boost trust in AI, giving organizations an analytical edge.
Social implications: The AI-driven digital transformation is inevitable; the only question is whether we will lead, participate, or fall behind. This paper explores how organizations can adapt to technological change and how employees can leverage AI to enhance efficiency with minimal disruption.
Originality/value: This paper offers a theoretical overview of trust in analytics and analyses over 30 interviews from real-life analytics projects, contributing to a field typically dominated by statistical or anecdotal evidence. It provides practical insights with scientific rigour derived from the interviews and the author's nearly decade-long consulting career.
Pages: 1405-1426
Page count: 22
Cited References
39 items total
  • [1] Agrawal A., 2018, PREDICTION MACHINES
  • [2] Beauchene V., 2023, AI at work: What people are saying
  • [3] Brynjolfsson E., 2023, Generative AI at Work.
  • [4] Brynjolfsson E., 2019, The economics of artificial intelligence: An agenda, P23
  • [5] Brynjolfsson E., 2017, HARVARD BUS REV, P1
  • [6] Candelon F., 2023, How people can create-and destroy-value with generative AI
  • [7] Charmaz K., 2006, Constructing grounded theory
  • [8] Chatzimparmpas A., Martins R. M., Jusufi I., Kucher K., Rossi F., Kerren A., 2020, The State of the Art in Enhancing Trust in Machine Learning Models with the Use of Visualizations, COMPUTER GRAPHICS FORUM, 39(03): 713-756
  • [9] Daugherty P. R., 2018, Human+ machine: Reimagining work in the age of AI
  • [10] Davenport T.H., 2007, Journal of the Association for Information Systems