The Effects of Transparency and Reliability of In-Vehicle Intelligent Agents on Driver Perception, Takeover Performance, Workload and Situation Awareness in Conditionally Automated Vehicles

Cited by: 9
Authors
Zang, Jing [1 ]
Jeon, Myounghoon [1 ]
Affiliations
[1] Virginia Tech, Dept Ind & Syst Engn, Blacksburg, VA 24061 USA
Keywords
automated vehicle; explainable AI; situation awareness (SA); transparency; reliability; trust; feedback; safety; model
DOI
10.3390/mti6090082
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In the context of automated vehicles, the transparency of in-vehicle intelligent agents (IVIAs) is an important contributor to driver perception, situation awareness (SA), and driving performance. However, the effects of agent transparency on driver performance when the agent is unreliable have not been fully examined. This paper examined how the transparency and reliability of IVIAs affect drivers' perception of the agent, takeover performance, workload, and SA. A 2 x 2 mixed factorial design was used, with transparency (Push: proactive vs. Pull: on-demand) as a within-subjects variable and reliability (high vs. low) as a between-subjects variable. In a driving simulator, 27 young drivers drove with two types of in-vehicle agents during conditionally automated driving. Results suggest that transparency influenced participants' perception of the agent and perceived workload. The high-reliability agent was associated with higher situation awareness and less effort than the low-reliability agent. There was an interaction effect between transparency and reliability on takeover performance. These findings have important implications for the continued design and development of IVIAs in automated vehicle systems.
Pages: 19