Whom to Trust, How and Why: Untangling Artificial Intelligence Ethics Principles, Trustworthiness, and Trust

Cited by: 4
Authors
Duenser, Andreas [1 ]
Douglas, David M. [2 ]
Affiliations
[1] Commonwealth Sci & Ind Res Org, Hobart, Tas 7005, Australia
[2] Commonwealth Sci & Ind Res Org, Brisbane, Qld 4001, Australia
Keywords
Artificial intelligence; Ethics; Intelligent systems; Control systems; Stakeholders; Automation; Training data
DOI
10.1109/MIS.2023.3322586
Chinese Library Classification (CLC) Number
TP18 [Theory of Artificial Intelligence]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
In this article, we present an overview of the literature on trust in artificial intelligence (AI) and AI trustworthiness, and argue for distinguishing these concepts more clearly and for gathering more empirical evidence on what contributes to people's trusting behaviors. We argue that trust in AI involves not only reliance on the system itself but also trust in the system's developers. AI ethics principles such as explainability and transparency are often assumed to promote user trust, but empirical evidence of how such features actually affect users' perceptions of a system's trustworthiness remains limited and mixed. AI systems should be recognized as sociotechnical systems, in which the people involved in designing, developing, deploying, and using the system are as important as the system itself in determining whether it is trustworthy. Without recognizing these nuances, "trust in AI" and "trustworthy AI" risk becoming nebulous terms for any desirable feature of AI systems.
Pages: 19-26
Page count: 8