An Experiment on the Impact of Information on the Trust in Artificial Intelligence

Cited by: 0
Authors
Meyer, Julien [1 ]
Remisch, David [1 ]
Affiliations
[1] Ryerson Univ, Toronto, ON, Canada
Source
HCI IN BUSINESS, GOVERNMENT AND ORGANIZATIONS, HCIBGO 2021 | 2021 / Vol. 12783
Keywords
Artificial intelligence; Pathology; Trust; Reliance; INTERNATIONAL-SOCIETY; AUTOMATION; PATHOLOGY; BIAS;
DOI
10.1007/978-3-030-77750-0_39
Chinese Library Classification
F8 [Public Finance, Finance]
Subject Classification Code
0202
Abstract
Artificial intelligence (AI) has made considerable progress in a variety of fields and has been suggested to perform as well as or better than many experts, creating great expectations about its potential to improve decision-making. While much progress has been made in refining the accuracy of algorithms, much remains to be determined about how these algorithms will influence decision-makers, especially in life-or-death decisions such as those in medicine. In such fields, human experts will remain the ultimate decision-makers for the foreseeable future. The literature suggests that decision-makers' reliance on algorithms may be influenced by the accuracy of the algorithm and by information on how the algorithm reached its conclusions. The objective of this paper is to determine the extent to which algorithmic advice and information on AI algorithm accuracy and model interpretability influence pathologists' decision-making. To test our hypotheses, we will conduct an online, quasi-experimental survey study with 120 pathologist respondents. Each participant will be provided with a series of prostate cancer samples and asked to assess the Gleason grade. Our hypothesis is that increasing the level of information will lead to increased reliance on automated systems. This research will provide insight into trust in AI: first, the extent to which pathologists trust AI advice; second, the extent to which each type of information contributes to trust.
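The abstract does not specify how reliance will be measured or analyzed. Purely as an illustration, the Python sketch below shows one way a reliance-on-advice score could be computed for a design of this kind, assuming a hypothetical tidy table of responses with invented field names (initial_grade, ai_grade, final_grade, condition) and a weight-of-advice-style measure; it is not the authors' method.

# Illustrative sketch only; all data fields and values are hypothetical.
from collections import defaultdict
from statistics import mean

def reliance(initial: int, ai: int, final: int) -> float:
    """Share of the gap between the initial Gleason grade and the AI grade
    that the final grade closes (a weight-of-advice-style score)."""
    if ai == initial:
        return 1.0 if final == ai else 0.0
    return (final - initial) / (ai - initial)

def mean_reliance_by_condition(rows):
    """Average the per-case reliance scores within each information condition."""
    scores = defaultdict(list)
    for r in rows:
        scores[r["condition"]].append(
            reliance(r["initial_grade"], r["ai_grade"], r["final_grade"])
        )
    return {cond: mean(vals) for cond, vals in scores.items()}

if __name__ == "__main__":
    # Toy records standing in for pathologist survey responses.
    rows = [
        {"condition": "no_info", "initial_grade": 3, "ai_grade": 4, "final_grade": 3},
        {"condition": "accuracy", "initial_grade": 3, "ai_grade": 4, "final_grade": 4},
        {"condition": "interpretability", "initial_grade": 4, "ai_grade": 5, "final_grade": 5},
    ]
    print(mean_reliance_by_condition(rows))

Comparing such per-condition averages is one conventional way to test whether more information about the algorithm is associated with greater reliance on its advice.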
Pages: 600-607
Number of pages: 8