Trust Indicators and Explainable AI: A Study on User Perceptions

Cited by: 4

Authors
Ribes, Delphine [1 ]
Henchoz, Nicolas [1 ]
Portier, Helene [1 ]
Defayes, Lara [1 ]
Thanh-Trung Phan [4 ,5 ]
Gatica-Perez, Daniel [4 ,5 ]
Sonderegger, Andreas [2 ,3 ]
Affiliations
[1] Ecole Polytech Fed Lausanne, EPFL ECAL Lab, Lausanne, Switzerland
[2] Bern Univ Appl Sci, Bern, Switzerland
[3] Univ Fribourg, Fribourg, Switzerland
[4] Idiap Res Inst, Martigny, Switzerland
[5] Ecole Polytech Fed Lausanne, LIDIAP STI, Lausanne, Switzerland
Source
HUMAN-COMPUTER INTERACTION, INTERACT 2021, PT II | 2021, Vol. 12933
Keywords
Trust indicators; Fake news; Transparency; Design; Explainable AI; XAI; Understandable AI; SOURCE CREDIBILITY; NEWS;
DOI
10.1007/978-3-030-85616-8_39
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Nowadays, search engines, social media, and news aggregators are the preferred services for news access. Aggregation is mostly based on artificial intelligence technologies, which raises a new challenge: trust, which has been ranked as the most important factor for the media business. This paper reports the findings of a study evaluating how manipulations of interface design, and of the information provided through eXplainable Artificial Intelligence (XAI), influence user perception in the context of news content aggregators. In an experimental online study, various layouts and scenarios were developed, implemented, and tested with 266 participants. Measures of trust, understanding, and preference were recorded. Results showed no influence of the factors on trust. However, the data indicate that layout, for example the implicit integration of the media source through layout structure, has a significant effect on the perceived importance of citing the source of a media item. Moreover, the amount of information presented to explain the AI had a negative influence on user understanding. This highlights the importance and difficulty of making XAI understandable for its users.
Pages: 662-671
Page count: 10