Improving Human-Machine Collaboration Through Transparency-based Feedback - Part I: Human Trust and Workload Model

Cited by: 29
Authors
Akash, Kumar [1 ]
Polson, Katelyn [1 ]
Reid, Tahira [1 ]
Jain, Neera [1 ]
Affiliation
[1] Purdue Univ, Sch Mech Engn, W Lafayette, IN 47907 USA
Funding
National Science Foundation (USA)
Keywords
trust in automation; human-machine interface; intelligent machines; Markov decision processes; stochastic modeling; parameter estimation; dynamic behavior; automation
DOI
10.1016/j.ifacol.2019.01.028
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
In this paper, we establish a partially observable Markov decision process (POMDP) model framework that captures dynamic changes in human trust and workload for contexts that involve interactions between humans and intelligent decision-aid systems. We use a reconnaissance mission study to elicit a dynamic change in human trust and workload with respect to the system's reliability and user interface transparency as well as the presence or absence of danger. We use human subject data to estimate transition and observation probabilities of the POMDP model and analyze the trust-workload behavior of humans. Our results indicate that higher transparency is more likely to increase human trust when the existing trust is low but also is more likely to decrease trust when it is already high. Furthermore, we show that by using high transparency, the workload of the human is always likely to increase. In our companion paper, we use this estimated model to develop an optimal control policy that varies system transparency to affect human trust-workload behavior towards improving human-machine collaboration. (C) 2019, IFAC (International Federation of Automatic Control) Hosting by Elsevier Ltd. All rights reserved.
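The abstract describes estimating the transition and observation probabilities of a POMDP whose hidden states are human trust and workload, and whose actions include varying interface transparency. As an illustration of the kind of inference such a model supports, the following is a minimal sketch of the standard POMDP belief update over a hypothetical two-level trust state; the state names, actions, and all probability values are assumptions for illustration, not the paper's estimated parameters.

```python
import numpy as np

# Hypothetical hidden trust states, transparency actions, and observations.
states = ["low_trust", "high_trust"]
observations = ["disagree", "agree"]  # e.g., whether the human follows the aid

# T[a][s, s']: assumed transition probabilities P(s' | s, a).
# The high-transparency matrix loosely mirrors the paper's finding that high
# transparency tends to raise low trust but can lower already-high trust.
T = {
    "low_transparency":  np.array([[0.9, 0.1],
                                   [0.2, 0.8]]),
    "high_transparency": np.array([[0.6, 0.4],
                                   [0.3, 0.7]]),
}

# O[a][s', o]: assumed observation probabilities P(o | s', a).
O = {
    "low_transparency":  np.array([[0.7, 0.3],
                                   [0.2, 0.8]]),
    "high_transparency": np.array([[0.8, 0.2],
                                   [0.1, 0.9]]),
}

def belief_update(belief, action, obs_index):
    """One Bayes-filter step: b'(s') ∝ O(o | s', a) * sum_s T(s' | s, a) b(s)."""
    predicted = T[action].T @ belief                # predict next-state distribution
    unnormalized = O[action][:, obs_index] * predicted  # weight by observation likelihood
    return unnormalized / unnormalized.sum()

# Start from a uniform prior over trust, then observe agreement after
# a high-transparency recommendation.
b = np.array([0.5, 0.5])
b = belief_update(b, "high_transparency", observations.index("agree"))
print(b)  # posterior belief over [low_trust, high_trust]
```

With these assumed numbers, observing agreement under high transparency shifts the belief toward the high-trust state; the paper's companion work uses the estimated (not assumed) model to choose transparency actions optimally.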
Pages: 315-321
Page count: 7