Effect of Explanation Conceptualisations on Trust in AI-assisted Credibility Assessment

Cited by: 0
Authors
Pareek, Saumya [1 ]
van Berkel, Niels [2 ]
Velloso, Eduardo [1 ]
Goncalves, Jorge [1 ]
Affiliations
[1] The University of Melbourne, Australia
[2] Aalborg University, Denmark
Keywords
artificial intelligence; conceptualisation validations; credibility assessment; human-AI interaction; misinformation; reliance; trust
DOI
10.1145/3686922
Abstract
As misinformation increasingly proliferates on social media platforms, it has become crucial to explore how best to convey automated news credibility assessments to end-users and foster trust in fact-checking AIs. In this paper, we investigate how model-agnostic, natural language explanations influence trust in and reliance on a fact-checking AI. We construct explanations from four Conceptualisation Validations (CVs), namely consensual, expert, internal (logical), and empirical, which are foundational units of evidence that humans utilise to validate and accept new information. Our results show that providing explanations significantly enhances trust in AI, even in a fact-checking context where influencing pre-existing beliefs is often challenging, with different CVs causing varying degrees of reliance. We find consensual explanations to be the least influential, with expert, internal, and empirical explanations exerting twice as much influence. However, we also find that users could not discern whether the AI directed them towards the truth, highlighting the dual nature of explanations to both guide and potentially mislead. Further, we uncover the presence of automation bias and aversion during collaborative fact-checking, indicating how users' previously established trust in AI can moderate their reliance on AI judgements. We also observe the manifestation of a 'boomerang' or backfire effect, often seen in traditional corrections to misinformation, whereby individuals who perceive the AI as biased or untrustworthy double down and reinforce their existing (in)correct beliefs when challenged by the AI. We conclude by presenting nuanced insights into the dynamics of user behaviour during AI-based fact-checking, offering important lessons for social media platforms. © 2024 Copyright held by the owner/author(s).