Trust in AI: progress, challenges, and future directions
Cited by: 6
Authors: Afroogh, Saleh [1]; Akbari, Ali [2]; Malone, Emmie [3]; Kargar, Mohammadali [4]; Alambeigi, Hananeh [5]
Affiliations:
[1] Univ Texas Austin, Urban Informat Lab, Austin, TX 78712 USA
[2] Stanford Univ, Sch Med, 450 Serra Mall, Stanford, CA 94305 USA
[3] Lone Star Coll, Dept Philosophy, Houston, TX 77088 USA
[4] Texas A&M Univ, Dept Mech Engn, College Stn, TX USA
[5] Texas A&M Univ, Dept Ind & Syst Engn, College Stn, TX USA
Source: HUMANITIES & SOCIAL SCIENCES COMMUNICATIONS, 2024, Vol. 11, Issue 1
Keywords: ARTIFICIAL-INTELLIGENCE; HEALTH-CARE; SYSTEMS; INFORMATION; BLOCKCHAIN; AUTONOMY; PRIVACY; EMPATHY; TRANSPARENCY; AUTOMATION
DOI: 10.1057/s41599-024-04044-8
Chinese Library Classification (CLC): C [Social Sciences, General]
Subject classification codes: 03; 0303
Abstract:
The increasing use of artificial intelligence (AI) systems in our daily lives through various applications, services, and products highlights the significance of trust and distrust in AI from a user perspective. AI-driven systems have diffused significantly into many aspects of our lives, serving as beneficial "tools" used by human agents. These systems are also evolving to act as co-assistants or semi-agents in specific domains, potentially influencing human thought, decision-making, and agency. Trust and distrust in AI act as regulators of this diffusion: trust can increase, and distrust can reduce, the rate of AI adoption. Recently, a variety of studies have focused on the different dimensions of trust and distrust in AI and the relevant considerations. In this systematic literature review, after conceptualizing trust as it appears in the current AI literature, we investigate trust in different types of human-machine interaction and its impact on technology acceptance across domains. We also propose a taxonomy of technical (i.e., safety, accuracy, robustness) and non-technical axiological (i.e., ethical, legal, and mixed) trustworthiness metrics, along with some trustworthiness measurements. Moreover, we examine major trust-breakers in AI (e.g., threats to autonomy and dignity) and trust-makers, and propose future directions and probable solutions for the transition to trustworthy AI.
Pages: 30