Taking the Person Seriously: Ethically Aware IS Research in the Era of Reinforcement Learning-Based Personalization

Cited by: 5
Authors
Greene, Travis [1 ]
Shmueli, Galit [2 ]
Ray, Soumya [2 ]
Affiliations
[1] Copenhagen Business School, Department of Digitalization, Frederiksberg, Denmark
[2] National Tsing Hua University, Institute of Service Science, Hsinchu, Taiwan
Source
JOURNAL OF THE ASSOCIATION FOR INFORMATION SYSTEMS | 2023, Vol. 24, No. 6
Keywords
Personalization; Reinforcement Learning; Sociotechnical; Data Protection; AI Ethics; Digital Platforms; Artificial Intelligence; Principles; Autonomy; Behavior; Persuasion; Challenges; Framework; Facebook; Identity; Privacy
DOI
10.17705/1jais.00800
Chinese Library Classification (CLC)
TP [Automation technology, computer technology]
Discipline Code
0812
Abstract
Advances in reinforcement learning and implicit data collection on large-scale commercial platforms mark the beginning of a new era of personalization aimed at the adaptive control of human user environments. We present five emergent features of this new paradigm of personalization that endanger persons and societies at scale and analyze their potential to reduce personal autonomy, destabilize social and political systems, and facilitate mass surveillance and social control, among other concerns. We argue that current data protection laws, most notably the European Union's General Data Protection Regulation, are limited in their ability to adequately address many of these issues. Nevertheless, we believe that IS researchers are well-situated to engage with and investigate this new era of personalization. We propose three distinct directions for ethically aware reinforcement learning-based personalization research uniquely suited to the strengths of IS researchers across the sociotechnical spectrum.
Pages: 1527-1561
Number of pages: 36