Effect of Explanation Conceptualisations on Trust in AI-assisted Credibility Assessment

Cited by: 0
Authors
Pareek, Saumya [1 ]
van Berkel, Niels [2 ]
Velloso, Eduardo [1 ]
Goncalves, Jorge [1 ]
Affiliations
[1] The University of Melbourne, Australia
[2] Aalborg University, Denmark
Keywords
artificial intelligence; conceptualisation validations; credibility assessment; human-AI interaction; misinformation; reliance; trust
DOI
10.1145/3686922
Abstract
As misinformation increasingly proliferates on social media platforms, it has become crucial to explore how to best convey automated news credibility assessments to end-users, and foster trust in fact-checking AIs. In this paper, we investigate how model-agnostic, natural language explanations influence trust and reliance on a fact-checking AI. We construct explanations from four Conceptualisation Validations (CVs) – namely consensual, expert, internal (logical), and empirical – which are foundational units of evidence that humans utilise to validate and accept new information. Our results show that providing explanations significantly enhances trust in AI, even in a fact-checking context where influencing pre-existing beliefs is often challenging, with different CVs causing varying degrees of reliance. We find consensual explanations to be the least influential, with expert, internal, and empirical explanations exerting twice as much influence. However, we also find that users could not discern whether the AI directed them towards the truth, highlighting the dual nature of explanations to both guide and potentially mislead. Further, we uncover the presence of automation bias and aversion during collaborative fact-checking, indicating how users’ previously established trust in AI can moderate their reliance on AI judgements. We also observe the manifestation of a ‘boomerang’/backfire effect often seen in traditional corrections to misinformation, with individuals who perceive AI as biased or untrustworthy doubling down and reinforcing their existing (in)correct beliefs when challenged by the AI. We conclude by presenting nuanced insights into the dynamics of user behaviour during AI-based fact-checking, offering important lessons for social media platforms. © 2024 Copyright held by the owner/author(s).