Enhancing human agency through redress in Artificial Intelligence Systems

Cited: 16
Authors
Fanni, Rosanna [1 ]
Steinkogler, Valerie Eveline [2 ]
Zampedri, Giulia [2 ]
Pierson, Jo [3 ]
Affiliations
[1] Ctr European Policy Studies CEPS, Brussels, Belgium
[2] Vrije Univ Brussel, EMJMD DCLead, Brussels, Belgium
[3] Vrije Univ Brussel, Imec SMIT, Brussels, Belgium
Keywords
Human agency; Artificial intelligence; AI mediation; Contestability; Redress
DOI
10.1007/s00146-022-01454-7
CLC number
TP18 [Artificial intelligence theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Recently, scholars across disciplines have raised ethical, legal and social concerns about the notion of human intervention, control, and oversight over Artificial Intelligence (AI) systems. This observation becomes particularly important in the age of ubiquitous computing and the increasing adoption of AI in everyday communication infrastructures. We apply Nicholas Garnham's conceptual perspective on mediation to users who are challenged both individually and societally when interacting with AI-enabled systems. One way to increase user agency is through mechanisms to contest faulty or flawed AI systems and their decisions, as well as to request redress. Currently, however, users structurally lack such mechanisms, which increases risks for vulnerable communities, for instance patients interacting with AI healthcare chatbots. To empower users in AI-mediated communication processes, this article introduces the concept of active human agency. We link our concept to examples of contestability and redress mechanisms and explain why these are necessary to strengthen active human agency. We argue that AI policy should introduce a right for users to swiftly contest or rectify an AI-enabled decision. Such a right would strengthen individual autonomy and fundamental rights in the digital age. We conclude by identifying routes for future theoretical and empirical research on active human agency in times of ubiquitous AI.
Pages: 537-547
Page count: 11