Algorithms and dehumanization: a definition and avoidance model

Cited by: 1
Authors
Schultz, Mario D. [1 ]
Clegg, Melanie [2 ]
Hofstetter, Reto [3 ]
Seele, Peter [4 ]
Affiliations
[1] Franklin Univ Switzerland, Div Business & Econ, Via Ponte Tresa 29, CH-6924 Sorengo, Switzerland
[2] Univ Lausanne, Dept Mkt, Fac Business & Econ, Anthropole 3055, CH-1015 Lausanne, Switzerland
[3] Univ St Gallen, Dufourstr 40A, CH-9000 St Gallen, Switzerland
[4] USI Univ Svizzera Italiana, Eth & Commun Law Ctr ECLC, Black Bldg,Off 115,Level 1,Via G Buffi 13, CH-6900 Lugano, Switzerland
Keywords
Algorithms; Dehumanization; Systematic interpretative review; Dehumanization avoidance model; ARTIFICIAL-INTELLIGENCE; DISCOURSE ETHICS; MACHINES; PEOPLE; RIGHTS; PATIENT; CYBORGS; ROBOTS; TRANSHUMANISM; AUTOMATION;
DOI
10.1007/s00146-024-02123-7
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Dehumanization by algorithms raises important issues for business and society. Yet, these issues remain poorly understood because the evolving dehumanization literature is fragmented across disciplines, spanning studies of colonialism, industrialization, post-colonialism, contemporary ethics, and technology. This article systematically reviews the literature on algorithms and dehumanization (n = 180 articles) and maps existing knowledge across several clusters that reveal its underlying characteristics. Based on the review, we find that algorithmic dehumanization is particularly problematic for human resource management and the future of work, managerial decision-making, consumer ethics, hard- and soft-law regulation, and basic values, including privacy and consumer rights. From the literature synthesis, we also derive the following definition of algorithmic dehumanization: the act of using algorithms and data in a way that results in the intentional or unintentional treatment of individuals and/or groups as less than fully human, thereby violating human rights, including liberty, equality, and dignity. Ultimately, we present a dehumanization avoidance model that serves as a structured research agenda and practical guide for addressing the challenges raised by algorithmic dehumanization. The model thus indicates promising pathways for future research at the intersection of AI and society and facilitates reflection on organizational processes that help improve corporate capabilities and managerial decision-making to avoid algorithmic dehumanization.
Pages: 2191-2211
Page count: 21
Cited References
185 records in total
[1] Lessons Learned About Autonomous AI: Finding a Safe, Efficacious, and Ethical Path Through the Development Process [J]. Abramoff, Michael D.; Tobey, Danny; Char, Danton S. AMERICAN JOURNAL OF OPHTHALMOLOGY, 2020, 214: 134-142.
[2] Posthuman perception of artificial intelligence in science fiction: an exploration of Kazuo Ishiguro's Klara and the Sun [J]. Ajeesh, A. K.; Rukmini, S. AI & SOCIETY, 2023, 38 (02): 853-860.
[3] Discrimination through optimization: How Facebook's ad delivery can lead to biased outcomes [J]. Ali, M.; Sapiezynski, P.; Bogen, M.; Korolova, A.; Mislove, A.; Rieke, A. Proceedings of the ACM on Human-Computer Interaction, 2019, 3 (CSCW).
[4] The Doctor-Patient Relationship With Artificial Intelligence [J]. Aminololama-Shakeri, Shadi; Lopez, Javier E. AMERICAN JOURNAL OF ROENTGENOLOGY, 2019, 212 (02): 308-310.
[5] Ashcroft B., 1995, POSTCOLONIAL STUD
[6] A Nudge Toward Universal Aspirin for Preeclampsia Prevention [J]. Ayala, Nina K.; Rouse, Dwight J. OBSTETRICS AND GYNECOLOGY, 2019, 133 (04): 725-728.
[8] Baba D, 2020, HERMENEIA, P15
[9] DIGNITY, LIBERTY, EQUALITY: A FUNDAMENTAL RIGHTS TRIANGLE OF CONSTITUTIONALISM [J]. Baer, Susanne. UNIVERSITY OF TORONTO LAW JOURNAL, 2009, 59 (04): 417-468.
[10] Algorithm Overdependence: How the Use of Algorithmic Recommendation Systems Can Increase Risks to Consumer Well-Being [J]. Banker, Sachin; Khetani, Salil. JOURNAL OF PUBLIC POLICY & MARKETING, 2019, 38 (04): 500-515.