Trust and ethics in AI

Times cited: 20
Authors
Choung, Hyesun [1 ]
David, Prabu [1 ]
Ross, Arun [1 ]
Affiliations
[1] Michigan State Univ, E Lansing, MI 48824 USA
Keywords
Trust in AI; Ethics requirements of AI; Public perceptions of AI; AUTOMATION; METAANALYSIS; FAMILIARITY; TECHNOLOGY; MODEL;
DOI
10.1007/s00146-022-01473-4
Chinese Library Classification (CLC) number
TP18 [Theory of artificial intelligence];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
With the growing influence of artificial intelligence (AI) in our lives, the ethical implications of AI have received attention from various communities. Building on previous work on trust in people and technology, we advance a multidimensional, multilevel conceptualization of trust in AI and examine the relationship between trust and ethics using data from a survey of a national sample in the U.S. This paper offers two key dimensions of trust in AI, human-like trust and functionality trust, and presents a multilevel conceptualization of trust in which dispositional, institutional, and experiential trust are each significantly correlated with the trust dimensions. Along with trust in AI, we examine perceptions of the importance of the seven ethics requirements for AI offered by the European Commission's High-Level Expert Group. The association between the ethics requirements and trust is then evaluated through regression analysis. Findings suggest that the ethical requirement of societal and environmental well-being is positively associated with human-like trust in AI, while two other ethical requirements, accountability and technical robustness, are significantly associated with functionality trust in AI. Further, trust in AI was observed to be higher than trust in other institutions. Drawing on our findings, we offer a multidimensional framework of trust, inspired by ethical values, to support the acceptance of AI as a trustworthy technology.
Pages: 733-745
Page count: 13