Toward Adaptive Trust Calibration for Level 2 Driving Automation

Cited by: 26
Authors
Akash, Kumar [1 ]
Jain, Neera [1 ]
Misu, Teruhisa [2 ]
Affiliations
[1] Purdue Univ, W Lafayette, IN 47907 USA
[2] Honda Res Inst USA Inc, San Jose, CA USA
Source
PROCEEDINGS OF THE 2020 INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION, ICMI 2020 | 2020
Keywords
user modeling; HMI for automated driving; trust calibration; HUMAN-MACHINE COLLABORATION; TRANSPARENCY-BASED FEEDBACK; SELF-CONFIDENCE; HUMANS; ALLOCATION; DESIGN; MODEL
DOI
10.1145/3382507.3418885
CLC number
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Properly calibrated human trust is essential for successful interaction between humans and automation. However, while human trust calibration can be improved by increased automation transparency, too much transparency can overwhelm the human and increase workload. To address this tradeoff, we present a probabilistic framework that uses a partially observable Markov decision process (POMDP) to model the coupled trust-workload dynamics of human behavior when interacting with automation. We specifically consider hands-off Level 2 driving automation in a city environment involving multiple intersections, where the human chooses whether or not to rely on the automation. We consider automation reliability, automation transparency, and scene complexity, along with human reliance and eye-gaze behavior, to model the dynamics of human trust and workload. We demonstrate that our framework can appropriately vary automation transparency based on real-time belief estimates of human trust and workload to achieve trust calibration.
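The core mechanism the abstract describes is maintaining a belief over latent human states (trust, workload) from noisy observations (reliance, eye gaze) and choosing a transparency level accordingly. The POMDP belief update can be sketched as below; all states, actions, and probability values are hypothetical placeholders for illustration, not the paper's fitted model.

```python
# Minimal POMDP belief-update sketch over discrete latent trust states.
# The paper models coupled trust-workload states; this sketch simplifies
# to two trust states with made-up probabilities.

STATES = ["low_trust", "high_trust"]

# T[action][s][s2]: P(s2 | s, action) -- hypothetical transition model.
T = {
    "high_transparency": [[0.6, 0.4], [0.1, 0.9]],
    "low_transparency":  [[0.8, 0.2], [0.3, 0.7]],
}

# O[action][s2][obs]: P(obs | s2, action), where obs=1 means the human
# relied on the automation and obs=0 means manual takeover (hypothetical).
O = {
    "high_transparency": [[0.7, 0.3], [0.2, 0.8]],
    "low_transparency":  [[0.8, 0.2], [0.3, 0.7]],
}

def belief_update(belief, action, obs):
    """Bayesian filter: b'(s2) ∝ O(obs | s2, a) * sum_s T(s2 | s, a) * b(s)."""
    new = []
    for s2 in range(len(STATES)):
        pred = sum(T[action][s][s2] * belief[s] for s in range(len(STATES)))
        new.append(O[action][s2][obs] * pred)
    z = sum(new)  # normalization constant
    return [p / z for p in new]

# Start uninformed; observe reliance after a high-transparency HMI message.
belief = [0.5, 0.5]
belief = belief_update(belief, "high_transparency", 1)
print(belief)  # belief shifts toward high_trust
```

A transparency controller in this style would then pick the action (e.g., more or less HMI detail) that best trades off calibrating trust against the workload such feedback imposes, using this belief as its state estimate.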
Pages: 538-547
Number of pages: 10