How to Interact with a Fully Autonomous Vehicle: Naturalistic Ways for Drivers to Intervene in the Vehicle System While Performing Non-Driving Related Tasks

Cited: 12
Authors
Ataya, Aya [1 ]
Kim, Won [1 ]
Elsharkawy, Ahmed [1 ]
Kim, SeungJun [1 ]
Affiliations
[1] Gwangju Inst Sci & Technol, Sch Integrated Technol, Gwangju 61005, South Korea
Keywords
fully autonomous vehicle (FAV); input interactions; non-driving related task (NDRT); vehicle system intervention; behavior
DOI
10.3390/s21062206
CLC number
O65 [Analytical Chemistry]
Discipline codes
070302; 081704
Abstract
Autonomous vehicle technology increasingly allows drivers to turn their primary attention to secondary tasks (e.g., eating or working). This dramatic behavioral change requires new input modalities to support driver-vehicle interaction, and these modalities must match the driver's in-vehicle activities and the interaction situation. Prior studies addressing this question did not consider how the acceptance of input modalities is affected by the physical and cognitive engagement levels of drivers performing non-driving related tasks (NDRTs), or how that acceptance varies with the interaction situation. This study investigates naturalistic interactions with a fully autonomous vehicle system in different intervention scenarios while drivers perform NDRTs. Using an online methodology with 360 participants, we presented four NDRTs with different physical and cognitive engagement levels and tested the six most common intervention scenarios (24 cases). For each case, participants evaluated our seven proposed natural input interactions: touch, voice, hand gesture, and their combinations. Results show that NDRTs influence the driver's choice of input interaction more than the intervention scenario category does, and that variation in physical load influences input selection more than variation in cognitive load. We also present a decision-making model of driver preferences that identifies the most natural inputs and helps user experience designers better meet drivers' needs.
Pages: 1-25 (25 pages)