How does people's trust in automated vehicles change after automation errors occur? An empirical study on dynamic trust in automated driving

Cited by: 0
Authors
Tan, Hao [1 ,2 ]
Hao, Yuyue [2 ]
Affiliations
[1] State Key Laboratory of Advanced Design and Manufacturing for Vehicle Body, Changsha, People's Republic of China
[2] Hunan University, School of Design, Changsha 410082, People's Republic of China
Keywords
automation error; human-machine interaction; human-vehicle interaction; perceived risk; trust; acceptance; behavior; safety
DOI
10.1002/hfm.21001
Chinese Library Classification
T [Industrial Technology]
Subject Classification Code
08
Abstract
When automated vehicles (AVs) make errors, drivers' trust can be easily damaged, reducing their willingness to use AVs. This study examines how AV errors reduce drivers' trust by affecting their subjective perceptions. A driving simulator experiment was conducted in which 104 participants (58 male, 46 female) experienced automated driving with automation errors and rated their trust. The results indicate that automation errors affect drivers' perceived predictability, perceived reliability, and perceived safety, which in turn leads to a decline in trust and the abandonment of automated driving. When automation errors occur, perceived safety plays a more critical role in drivers' trust. In addition, when automation errors occur in low-risk driving tasks, drivers' trust drops faster than in high-risk tasks. This paper explores the internal mechanisms behind the decline in drivers' trust after AV automation errors and further considers how different external risks influence these perception factors and trust. The findings can help AV manufacturers formulate trust repair strategies tailored to different driving tasks and levels of accident severity.
Pages: 449-463
Number of pages: 15