Trust in hybrid human-automated decision-support

Cited: 7
Authors
Kares, Felix [1 ,3 ]
Koenig, Cornelius J. [1 ]
Bergs, Richard [1 ]
Protzel, Clea [1 ]
Langer, Markus [2 ]
Affiliations
[1] Univ Saarland, Fachrichtung Psychol, Saarbrucken, Germany
[2] Philipps Univ Marburg, Fachbereich Psychol, Marburg, Germany
[3] Univ Saarland, Fachrichtung Psychol, Arbeits & Org Psychol, Campus A1 3, D-66123 Saarbrucken, Germany
Keywords
artificial intelligence; decision-support; human-automation collaboration; personnel selection; trust; BIAS; RESOURCE; JUSTICE; PEOPLE;
DOI
10.1111/ijsa.12423
CLC classification
B849 [Applied Psychology]
Discipline code
040203
Abstract
Research has examined trust in humans and trust in automated decision support. Although reflecting a likely realization of decision support in high-risk tasks such as personnel selection, trust in hybrid human-automation teams has thus far received limited attention. In two experiments (N1 = 170, N2 = 154) we compare trust, trustworthiness, and trusting behavior for different types of decision support (automated, human, hybrid) across two assessment contexts (personnel selection, bonus payments). We additionally examined a possible trust violation by presenting one group of participants a preselection that included predominantly male candidates, thus reflecting possible unfair bias. Whereas fully automated decisions were trusted less, results suggest that trust in hybrid decision support was similar to trust in human-only support. Trust violations were not perceived differently based on the type of support. We discuss theoretical (e.g., trust in hybrid support) and practical implications (e.g., keeping humans in the loop to prevent negative reactions).
Pages: 388-402
Number of pages: 15
Related papers
50 records
  • [1] Human-automated planner collaboration in complex resource allocation decision support systems
    Cummings, M. L.
    Bruni, Sylvain
    INTELLIGENT DECISION TECHNOLOGIES-NETHERLANDS, 2010, 4 (02): : 101 - 114
  • [2] DECISION TECHNOLOGY AND INTELLIGENT DECISION-SUPPORT
    BUNN, DW
    SILVERMAN, BG
    EUROPEAN JOURNAL OF OPERATIONAL RESEARCH, 1995, 84 (01) : 1 - 4
  • [3] Trust in Artificial Intelligence: Comparing Trust Processes Between Human and Automated Trustees in Light of Unfair Bias
    Langer, Markus
    Koenig, Cornelius J.
    Back, Caroline
    Hemsing, Victoria
    JOURNAL OF BUSINESS AND PSYCHOLOGY, 2023, 38 (03) : 493 - 508
  • [4] Mental Health, Trust, and Robots: Towards Understanding How Mental Health Mediates Human-Automated System Trust and Reliance
    Crawford, Jordan R.
    Hubbard, Ella-Mae
    Goh, Yee Mey
    ADVANCES IN NEUROERGONOMICS AND COGNITIVE ENGINEERING, 2020, 953 : 119 - 128
  • [5] DECISION-SUPPORT SYSTEMS - SCOPE AND POTENTIAL
    RADERMACHER, FJ
    DECISION SUPPORT SYSTEMS, 1994, 12 (4-5) : 257 - 265
  • [6] Using generative AI as decision-support tools: unraveling users' trust and AI appreciation
    Huynh, Minh-Tay
    JOURNAL OF DECISION SYSTEMS, 2024,
  • [7] SAFETY AND DECISION-SUPPORT IN ONCOLOGY
    HAMMOND, P
    HARRIS, AL
    DAS, SK
    WYATT, JC
    METHODS OF INFORMATION IN MEDICINE, 1994, 33 (04) : 371 - 381
  • [8] Differences in Trust between Human and Automated Decision Aids
    Pearson, Carl J.
    Welk, Allaire K.
    Boettcher, William A.
    Mayer, Roger C.
    Streck, Sean
    Simons-Rudolph, Joseph M.
    Mayhorn, Christopher B.
    SYMPOSIUM AND BOOTCAMP ON THE SCIENCE OF SECURITY, 2016, : 95 - 98
  • [9] Concordance as evidence in the Watson for Oncology decision-support system
    Tupasela, Aaro
    Di Nucci, Ezio
    AI & SOCIETY, 2020, 35 (04) : 811 - 818