Trust and ethics in AI

Cited by: 0
Authors
Hyesun Choung
Prabu David
Arun Ross
Affiliation
Michigan State University
Source
AI & SOCIETY | 2023 / Volume 38
Keywords
Trust in AI; Ethics requirements of AI; Public perceptions of AI
DOI
Not available
Abstract
With the growing influence of artificial intelligence (AI) in our lives, the ethical implications of AI have received attention from various communities. Building on previous work on trust in people and technology, we advance a multidimensional, multilevel conceptualization of trust in AI and examine the relationship between trust and ethics using data from a survey of a national sample in the U.S. This paper offers two key dimensions of trust in AI, human-like trust and functionality trust, and presents a multilevel conceptualization of trust in which dispositional, institutional, and experiential trust are each significantly correlated with the two trust dimensions. Along with trust in AI, we examine perceptions of the importance of the seven ethics requirements of AI offered by the European Commission's High-Level Expert Group, and then evaluate the association between these ethics requirements and trust through regression analysis. Findings suggest that the ethical requirement of societal and environmental well-being is positively associated with human-like trust in AI, while two other ethical requirements, accountability and technical robustness, are significantly associated with functionality trust in AI. Further, trust in AI was observed to be higher than trust in other institutions. Drawing from our findings, we offer a multidimensional framework of trust that is inspired by ethical values to ensure the acceptance of AI as a trustworthy technology.
Pages: 733-745
Page count: 12