Expl(AI)ned: The Impact of Explainable Artificial Intelligence on Users' Information Processing

Times Cited: 55
Authors
Bauer, Kevin [1 ]
von Zahn, Moritz [2 ]
Hinz, Oliver [2 ]
Affiliations
[1] Univ Mannheim, Informat Syst Dept, D-68161 Mannheim, Germany
[2] Goethe Univ, Informat Syst Dept, D-60323 Frankfurt, Germany
Keywords
explainable artificial intelligence; user behavior; information processing; mental models; THEORETICAL FOUNDATIONS; MACHINE; EXPLANATIONS; SYSTEMS; PERSPECTIVES; ALGORITHMS; LOOKING; EXPERT;
DOI
10.1287/isre.2023.1199
CLC Number
G25 [Library Science; Librarianship]; G35 [Information Science; Information Work];
Discipline Code
1205; 120501;
Abstract
Because of a growing number of initiatives and regulations, the predictions of modern artificial intelligence (AI) systems increasingly come with explanations about why they behave the way they do. In this paper, we explore the impact of feature-based explanations on users' information processing. We designed two complementary empirical studies in which participants made incentivized decisions either on their own, with the aid of opaque predictions, or with the aid of explained predictions. In Study 1, laypeople engaged in a deliberately abstract investment game task. In Study 2, experts from the real estate industry estimated listing prices for real German apartments. Our results indicate that the provision of feature-based explanations paves the way for AI systems to reshape users' sensemaking of information and their understanding of the world around them. Specifically, explanations change users' situational weighting of available information and evoke mental model adjustments. Crucially, mental model adjustments are subject to confirmation bias, so that misconceptions can persist and even accumulate, possibly leading to suboptimal or biased decisions. Additionally, mental model adjustments create spillover effects that alter user behavior in related yet disparate domains. Overall, this paper provides important insights into potential downstream consequences of the widespread deployment of modern explainable AI methods. In particular, side effects of mental model adjustments present a potential risk of manipulating user behavior, promoting discriminatory inclinations, and increasing noise in decision making. Our findings may inform the ongoing efforts of companies building AI systems and of regulators that aim to mitigate problems associated with the black-box nature of many modern AI systems.
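The paper reports behavioral studies rather than code, but to make the notion of a feature-based explanation concrete, the Python sketch below trains a toy listing-price model on fabricated apartment features and reports permutation-based feature importances. Everything in it, the feature names, the simulated data, and the choice of permutation importance as the explanation method, is an illustrative assumption and not the authors' actual setup.

# Illustrative sketch only: a feature-based explanation for a hypothetical
# apartment-price model, loosely echoing the Study 2 setting. Feature names,
# data, and the use of permutation importance are assumptions for illustration.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)

# Hypothetical listing features: size (sqm), rooms, year built, distance to center (km)
n = 500
X = np.column_stack([
    rng.uniform(30, 150, n),      # size_sqm
    rng.integers(1, 6, n),        # rooms
    rng.integers(1950, 2020, n),  # year_built
    rng.uniform(0, 20, n),        # dist_center_km
])
# Simulated price: driven mainly by size, rooms, and location, plus noise
y = 3000 * X[:, 0] + 15000 * X[:, 1] - 8000 * X[:, 3] + rng.normal(0, 20000, n)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Feature-based explanation: how much does shuffling each feature
# reduce the model's R^2 score?
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, imp in zip(["size_sqm", "rooms", "year_built", "dist_center_km"],
                     result.importances_mean):
    print(f"{name:15s} importance: {imp:.3f}")

Such per-feature importance scores are the kind of explanation the studies expose to users alongside the model's prediction; the paper's findings concern how seeing them alters users' weighting of information and their mental models.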
Pages: 1582-1602
Number of pages: 22