HEX: Human-in-the-loop explainability via deep reinforcement learning

Times Cited: 0
Authors
Lash, Michael T. [1 ]
Affiliations
[1] Univ Kansas, Sch Business, Analyt Informat & Operat Area, 1654 Naismith Dr, Lawrence, KS 66045 USA
Keywords
Explainability; Interpretability; Human-in-the-loop; Deep reinforcement learning; Machine learning; Behavioral machine learning; Decision support; EXPLANATIONS; ALGORITHMS; MODELS
DOI
10.1016/j.dss.2024.114304
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
The use of machine learning (ML) models in decision-making contexts, particularly in high-stakes decision-making, is fraught with issues and peril, since a person - not a machine - must ultimately be held accountable for the consequences of decisions made using such systems. Machine learning explainability (MLX) promises to provide decision-makers with prediction-specific rationale, assuring them that model-elicited predictions are made for the right reasons and are thus reliable. Few works explicitly consider this key human-in-the-loop (HITL) component, however. In this work we propose HEX, a human-in-the-loop deep reinforcement learning approach to MLX. HEX incorporates 0-distrust projection to synthesize decider-specific explainers that produce explanations strictly in terms of a decider's preferred explanatory features, and it can be applied to any classification model. Our formulation explicitly considers the decision boundary of the ML model in question via our proposed explanatory-point mode of explanation, thus ensuring explanations are specific to that model. We empirically evaluate HEX against competing methods, finding that it is competitive with the state of the art and outperforms other methods in human-in-the-loop scenarios. We also conduct a randomized, controlled laboratory experiment using actual explanations elicited from both HEX and competing methods, and we causally establish that our method increases deciders' trust and their tendency to rely on trusted features.
Pages: 12
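
To make the abstract's central idea more concrete, the sketch below illustrates, under stated assumptions, one simplified way of explaining a prediction strictly in terms of a decider's preferred features while respecting the classifier's decision boundary. It is not the HEX algorithm described in the paper (which trains a deep-reinforcement-learning explainer with 0-distrust projection); the function explanatory_point, the toy data, and all parameters are hypothetical and chosen only for illustration.

# Illustrative sketch only: a simplified boundary-crossing "explanatory point" search
# restricted to a decider's preferred features. NOT the HEX algorithm from the paper;
# all names and parameters below are hypothetical.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

def explanatory_point(model, x, preferred_idx, reference, n_steps=50):
    """Move x toward `reference` along only the preferred features and return the
    first point whose predicted class differs from x's, i.e. a crude point near the
    decision boundary expressed solely in terms of the decider's trusted features."""
    base_label = model.predict(x.reshape(1, -1))[0]
    for t in np.linspace(0.0, 1.0, n_steps):
        z = x.copy()
        # Only the decider-preferred features are allowed to change.
        z[preferred_idx] = (1 - t) * x[preferred_idx] + t * reference[preferred_idx]
        if model.predict(z.reshape(1, -1))[0] != base_label:
            return z, z[preferred_idx] - x[preferred_idx]  # point + feature deltas
    return None, None  # boundary not reached using preferred features alone

# Toy usage: explain one instance using only features 0 and 3.
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X, y)
x = X[0]
opposite = y != clf.predict(x.reshape(1, -1))[0]
reference = X[opposite].mean(axis=0)  # centroid of the opposite predicted class
point, deltas = explanatory_point(clf, x, preferred_idx=np.array([0, 3]), reference=reference)
print("Boundary-crossing point found:", point is not None)

In this toy setup the explanation, when one exists, is just the change in the two trusted features needed to cross the model's decision boundary; HEX instead learns a decider-specific explainer with deep reinforcement learning rather than performing this naive line search.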