Human-AI collaboration is not very collaborative yet: a taxonomy of interaction patterns in AI-assisted decision making from a systematic review

Cited by: 0
Authors
Gomez, Catalina [1 ]
Cho, Sue Min [1 ]
Ke, Shichang [1 ]
Huang, Chien-Ming [1 ]
Unberath, Mathias [1 ]
Affiliations
[1] Johns Hopkins Univ, Dept Comp Sci, Baltimore, MD 21218 USA
Source
FRONTIERS IN COMPUTER SCIENCE | 2025, Vol. 6
Keywords
artificial intelligence; human-AI interaction; decision-making; interaction patterns; interactivity;
DOI
10.3389/fcomp.2024.1521066
Chinese Library Classification (CLC)
TP39 [Computer Applications];
Discipline classification codes
081203 ; 0835 ;
Abstract
Leveraging Artificial Intelligence (AI) in decision support systems has disproportionately focused on technological advancements, often overlooking the alignment between algorithmic outputs and human expectations. A human-centered perspective attempts to alleviate this concern by designing AI solutions that integrate seamlessly with existing processes. Determining what information AI should provide to aid humans is vital, a concept underscored by explainable AI's efforts to justify AI predictions. However, how the information is presented, e.g., the sequence of recommendations and the solicitation of interpretations, is equally crucial, as complex interactions may emerge between humans and AI. While empirical studies have evaluated human-AI dynamics across domains, a common vocabulary for human-AI interaction protocols is lacking. To promote more deliberate consideration of interaction designs, we introduce a taxonomy of interaction patterns that delineates various modes of human-AI interactivity. We summarize the results of a systematic review of the AI-assisted decision-making literature, covering 105 articles, and identify trends and opportunities in existing interactions across application domains. We find that current interactions are dominated by simplistic collaboration paradigms, leaving little support for truly interactive functionality. Our taxonomy offers a tool for understanding interactivity with AI in decision making and for fostering interaction designs that achieve clear communication, trustworthiness, and collaboration.
Pages: 15
相关论文
共 129 条
  • [1] In search of a Goldilocks zone for credible AI
    Allan, Kevin
    Oren, Nir
    Hutchison, Jacqui
    Martin, Douglas
    [J]. SCIENTIFIC REPORTS, 2021, 11 (01)
  • [2] Alufaisan Y, 2021, AAAI CONF ARTIF INTE, V35, P6618
  • [3] Power to the People: The Role of Humans in Interactive Machine Learning
    Amershi, Saleema
    Cakmak, Maya
    Knox, W. Bradley
    Kulesza, Todd
    [J]. AI MAGAZINE, 2014, 35 (04) : 105 - 120
  • [4] How Much Reliability Is Enough? A Context-Specific View on Human Interaction With (Artificial) Agents From Different Perspectives
    Appelganc, Ksenia
    Rieger, Tobias
    Roesler, Eileen
    Manzey, Dietrich
    [J]. JOURNAL OF COGNITIVE ENGINEERING AND DECISION MAKING, 2022, 16 (04) : 207 - 221
  • [5] AI-Assisted Human Labeling: Batching for Efficiency without Overreliance
    Ashktorab Z.
    Desmond M.
    Andres J.
    Muller M.
    Joshi N.N.
    Brachman M.
    Sharma A.
    Brimijoin K.
    Pan Q.
    Wolf C.T.
    Duesterwald E.
    Dugan C.
    Geyer W.
    Reimer D.
    [J]. Proceedings of the ACM on Human-Computer Interaction, 2021, 5 (CSCW1)
  • [6] When Machine and Bandwagon Heuristics Compete: Understanding Users' Response to Conflicting AI and Crowdsourced Fact-Checking
    Banas, John A.
    Palomares, Nicholas A.
    Richards, Adam S.
    Keating, David M.
    Joyce, Nick
    Rains, Stephen A.
    [J]. HUMAN COMMUNICATION RESEARCH, 2022, 48 (03) : 430 - 461
  • [7] Bansal G., 2019, P AAAI C HUM COMP CR, V7, P2, DOI DOI 10.1609/HCOMP.V7I1.5285
  • [8] Bansal G., 2021, P CHI 21 CHI 21, V21, DOI [10, 10.1145/3411764.3445717, DOI 10.1145/3411764.3445717]
  • [9] A qualitative research framework for the design of user-centered displays of explanations for machine learning model predictions in healthcare
    Barda, Amie J.
    Horvat, Christopher M.
    Hochheiser, Harry
    [J]. BMC MEDICAL INFORMATICS AND DECISION MAKING, 2020, 20 (01)
  • [10] Baudel T., 2021, HUMAN COMPUTER INTER, V300, P320, DOI [10.1007/978-3-030-85613-722, DOI 10.1007/978-3-030-85613-722]