Does More Advice Help? The Effects of Second Opinions in AI-Assisted Decision Making

Cited by: 6
Authors
Lu Z. [1 ]
Wang D. [2 ]
Yin M. [1 ]
Affiliations
[1] Purdue University, West Lafayette
[2] Northeastern University, Boston
Funding
National Science Foundation (US)
Keywords
appropriate reliance; human-AI interaction; machine learning; second opinions
DOI
10.1145/3653708
Abstract
AI assistance in decision-making has become popular, yet people's inappropriate reliance on AI often leads to unsatisfactory human-AI collaboration performance. In this paper, through three pre-registered, randomized human subject experiments, we explore whether and how the provision of second opinions may affect decision-makers' behavior and performance in AI-assisted decision-making. We find that if both the AI model's decision recommendation and a second opinion are always presented together, decision-makers reduce their over-reliance on AI while increasing their under-reliance on AI, regardless of whether the second opinion is generated by a peer or another AI model. However, if decision-makers can decide when to solicit a peer's second opinion, we find that their active solicitations of second opinions have the potential to mitigate over-reliance on AI without inducing increased under-reliance in some cases. We conclude by discussing the implications of our findings for promoting effective human-AI collaborations in decision-making. © 2024 Copyright held by the owner/author(s).
References
118 in total
[1] Ashktorab Z., Desmond M., Andres J., Muller M., Joshi N.N., Brachman M., Sharma A., Brimijoin K., Pan Q., Wolf C.T., et al., AI-assisted human labeling: Batching for efficiency without overreliance, Proceedings of the ACM on Human-Computer Interaction, 5, pp. 1-27, (2021)
[2] Austin P.C., An introduction to propensity score methods for reducing the effects of confounding in observational studies, Multivariate Behavioral Research, 46, 3, pp. 399-424, (2011)
[3] Bansal G., Nushi B., Kamar E., Lasecki W.S., Weld D.S., Horvitz E., Beyond accuracy: The role of mental models in human-AI team performance, Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, 7, pp. 2-11, (2019)
[4] Bansal G., Nushi B., Kamar E., Weld D.S., Lasecki W.S., Horvitz E., Updates in human-AI teams: Understanding and addressing the performance/compatibility tradeoff, Proceedings of the AAAI Conference on Artificial Intelligence, 33, pp. 2429-2437, (2019)
[5] Bansal G., Wu T., Zhou J., Fok R., Nushi B., Kamar E., Ribeiro M.T., Weld D., Does the whole exceed its parts? The effect of AI explanations on complementary team performance, Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, pp. 1-16, (2021)
[6] Betzer A., Harries J.P., How online discussion board activity affects stock trading: The case of GameStop, Financial Markets and Portfolio Management, pp. 1-30, (2022)
[7] Bonaccio S., Dalal R.S., Advice taking and decision-making: An integrative literature review, and implications for the organizational sciences, Organizational Behavior and Human Decision Processes, 101, 2, pp. 127-151, (2006)
[8] Bucinca Z., Malaya M.B., Gajos K.Z., To trust or to think: Cognitive forcing functions can reduce overreliance on AI in AI-assisted decision-making, Proceedings of the ACM on Human-Computer Interaction, 5, pp. 1-21, (2021)
[9] Budescu D.V., Rantilla A.K., Confidence in aggregation of expert opinions, Acta Psychologica, 104, 3, pp. 371-398, (2000)
[10] Bussone A., Stumpf S., O'Sullivan D., The role of explanations on trust and reliance in clinical decision support systems, 2015 International Conference on Healthcare Informatics, pp. 160-169, (2015)