Transparent internal human-machine interfaces in highly automated shuttles to support the communication of minimal risk maneuvers to the passengers

Cited by: 0
Authors
Brandt, Thorben [1 ]
Wilbrink, Marc [1 ]
Oehl, Michael [1 ]
Affiliations
[1] German Aerospace Center (DLR), Institute of Transportation Systems, Braunschweig, Germany
Keywords
Human-computer interaction; Highly automated vehicles; Remote operation; Transparency; Trust; Information; Experience
DOI
10.1016/j.trf.2024.09.006
Chinese Library Classification
B849 [Applied Psychology]
Discipline code
040203
Abstract
In Highly Automated Vehicles (HAVs) without operators on board, user interaction with the vehicle automation plays an important role in the safe and inclusive use of these services. Especially when Minimal Risk Maneuvers (MRMs) are performed by the system, passengers face uncertain situations. One way to deepen passengers' understanding and predictability of these systems, and thereby reduce their uncertainty, is to enhance automation transparency. However, the literature shows a gap regarding enhancing the system transparency of HAVs during MRMs. We therefore investigated the impact of "observability" and "reasoning" as factors influencing transparency. In an online study, participants evaluated multiple internal Human-Machine Interfaces (iHMIs) as shuttle passengers. The presented iHMIs varied in their level of transparency by providing different information about the vehicle's "perception" and its "reasoning". Results show significant differences in passengers' understanding between the iHMI variants, providing evidence that information about the "perception" and "reasoning" of HAVs enhances system transparency. The results may provide first insights into passengers' informational needs when using HAVs, and they highlight the potential benefits of system transparency when designing HMIs for automated vehicles.
Pages: 275-287
Page count: 13