Human-AI joint task performance: Learning from uncertainty in autonomous driving systems

Cited: 10
Authors
Constantinides, Panos [1 ]
Monteiro, Eric [2 ]
Mathiassen, Lars [3 ]
Affiliations
[1] Univ Manchester, Alliance Manchester Business Sch, Manchester, England
[2] Norwegian Univ Sci & Technol, Gjovik, Norway
[3] Georgia State Univ, Atlanta, GA USA
Keywords
AI systems; Human-AI joint task performance; Uncertainty; Learning; Tesla; Autonomous driving systems; SITUATION AWARENESS; AUTOMATION; ALGORITHMS; MODEL; EXPLOITATION; TRUST; TIME; LOOP; ROAD;
DOI
10.1016/j.infoandorg.2024.100502
Chinese Library Classification (CLC)
G25 [Library science and librarianship]; G35 [Information science and information services];
Discipline classification codes
1205; 120501;
Abstract
High-uncertainty tasks such as making a medical diagnosis, judging a criminal justice case, and driving in a big city have a very low margin for error because of the potentially devastating consequences for human lives. In this paper, we focus on how humans learn from uncertainty while performing a high-uncertainty task with AI systems. We analyze Tesla's autonomous driving systems (ADS), a type of AI system, drawing on crash investigation reports, published reports on formal simulation tests, and YouTube recordings of informal simulation tests by amateur drivers. Our empirical analysis provides insights into how varied levels of uncertainty tolerance shape how humans learn from uncertainty in real time and over time to jointly perform the driving task with Tesla's ADS. Our core contribution is a theoretical model that explains human-AI joint task performance. Specifically, we show that the interdependencies between different modes of AI use, including uncontrolled automation, limited automation, expanded automation, and controlled automation, are dynamically shaped through humans' learning from uncertainty. We discuss how humans move between these modes of AI use by increasing, reducing, or reinforcing their uncertainty tolerance. We conclude by discussing implications for the design of AI systems, for policy on delegation in joint task performance, and for the use of data to improve learning from uncertainty.
Pages: 19