AI Systems and Respect for Human Autonomy

Cited by: 40
Authors
Laitinen, Arto [1]
Sahlgren, Otto [1]
Affiliations
[1] Tampere Univ, Fac Social Sci, Tampere, Finland
Source
FRONTIERS IN ARTIFICIAL INTELLIGENCE, 2021, Vol. 4
Funding
Academy of Finland
Keywords
autonomy; self-determination; respect; artificial intelligence; human-centered AI; ought to be; sociotechnical base
DOI
10.3389/frai.2021.705164
CLC Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
This study concerns the sociotechnical bases of human autonomy. Drawing on recent literature on AI ethics, philosophical literature on the dimensions of autonomy, and independent philosophical scrutiny, we first propose a multi-dimensional model of human autonomy and then discuss how AI systems can support or hinder it. What emerges is a philosophically motivated picture of autonomy and of the normative requirements that personal autonomy poses in the context of algorithmic systems. Ranging from consent to data collection and processing, to computational tasks and interface design, to institutional and societal considerations, various aspects of sociotechnical systems must be accounted for to obtain a full picture of the potential effects of AI systems on human autonomy. It is clear how human agents can hinder each other's autonomy, for example via coercion or manipulation, and how they can respect it. AI systems can likewise promote or hinder human autonomy, but can they literally respect or disrespect a person's autonomy? We argue for a philosophical view according to which AI systems, while not moral agents or bearers of duties and thus unable literally to respect or disrespect, are governed by so-called "ought-to-be norms." This explains the normativity at stake with AI systems. The responsible people (designers, users, etc.) have duties and are subject to ought-to-do norms that correspond to these ought-to-be norms.
Pages: 14