Context-Adaptive Management of Drivers' Trust in Automated Vehicles

Cited by: 14
Authors
Azevedo-Sa, Hebert [1 ]
Jayaraman, Suresh Kumaar [2 ]
Yang, X. Jessie [1 ]
Robert, Lionel P., Jr. [1 ]
Tilbury, Dawn M. [2 ]
Affiliations
[1] Univ Michigan, Robot Inst, Ann Arbor, MI 48109 USA
[2] Univ Michigan, Mech Engn Dept, Ann Arbor, MI 48109 USA
Funding
National Science Foundation (USA);
Keywords
Intelligent transportation systems; social human-robot interaction; human factors and human-in-the-loop; SELF-CONFIDENCE;
DOI
10.1109/LRA.2020.3025736
Chinese Library Classification (CLC)
TP24 [Robotics];
Subject Classification Codes
080202; 1405;
Abstract
Automated vehicles (AVs) that intelligently interact with drivers must build a trustworthy relationship with them. A calibrated level of trust is fundamental for the AV and the driver to collaborate as a team. Techniques that allow AVs to perceive drivers' trust from drivers' behaviors and react accordingly are, therefore, needed for context-aware systems designed to avoid trust miscalibrations. This letter proposes a framework for the management of drivers' trust in AVs. The framework is based on the identification of trust miscalibrations (when drivers undertrust or overtrust the AV) and on the activation of different communication styles to encourage or warn the driver when deemed necessary. Our results show that the management framework is effective, increasing (decreasing) the trust of undertrusting (overtrusting) drivers and reducing the average duration of trust miscalibration periods by approximately 40%. The framework is applicable to the design of SAE Level 3 automated driving systems and has the potential to improve the performance and safety of driver-AV teams.
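The abstract outlines the framework's core loop: estimate the driver's trust from behavior, detect undertrust or overtrust relative to what the AV can actually handle, and switch communication styles (encourage vs. warn) accordingly. The minimal Python sketch below illustrates that loop under stated assumptions; the trust and capability values, the margin threshold, and the style labels are hypothetical placeholders for illustration, not the authors' implementation.

```python
# Illustrative sketch of a trust-management decision step (not the paper's code).
# Trust and capability estimates are assumed to be normalized to [0, 1];
# the `margin` tolerance and the CommStyle labels are hypothetical.

from dataclasses import dataclass
from enum import Enum


class CommStyle(Enum):
    NEUTRAL = "neutral"      # trust is calibrated; no intervention needed
    ENCOURAGE = "encourage"  # undertrust: reassure the driver about AV capability
    WARN = "warn"            # overtrust: caution the driver to stay attentive


@dataclass
class TrustManager:
    margin: float = 0.1  # calibration tolerance (assumed value)

    def select_style(self, estimated_trust: float, estimated_capability: float) -> CommStyle:
        """Compare the driver's estimated trust with the AV's estimated capability
        in the current context and pick a communication style."""
        gap = estimated_trust - estimated_capability
        if gap < -self.margin:
            return CommStyle.ENCOURAGE  # undertrust detected
        if gap > self.margin:
            return CommStyle.WARN       # overtrust detected
        return CommStyle.NEUTRAL


# Example: the driver's trust lags the AV's capability in the current context,
# so the manager selects an encouraging communication style.
manager = TrustManager()
print(manager.select_style(estimated_trust=0.45, estimated_capability=0.80))  # CommStyle.ENCOURAGE
```

In the letter, the trust estimate would be inferred from observed driver behaviors rather than supplied directly; here it is passed in as a plain number to keep the sketch self-contained.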
Pages: 6908-6915
Page count: 8