Legal and Technical Feasibility of the GDPR's Quest for Explanation of Algorithmic Decisions: of Black Boxes, White Boxes and Fata Morganas

Cited by: 39
Authors
Brkan, Maja [1 ]
Bonnet, Gregory [2 ]
Affiliations
[1] Maastricht Univ, Fac Law, EU Law, Maastricht, Netherlands
[2] Normandy Univ, UNICAEN, ENSICAEN, CNRS, GREYC, Artificial Intelligence, Caen, France
Keywords
DATA PROTECTION; ARTIFICIAL INTELLIGENCE; SYSTEMS; SECRETS; AI
DOI
10.1017/err.2020.10
CLC Classification
D9 [Law]; DF [Law]
Subject Classification
0301
Abstract
Understanding the causes of and correlations behind algorithmic decisions is currently one of the major challenges of computer science, addressed under the umbrella term "explainable AI" (XAI). Being able to explain an AI-based system may help to make algorithmic decisions more satisfying and acceptable, to better control and update AI-based systems in case of failure, to build more accurate models, and to discover new knowledge directly or indirectly. On the legal side, the question of whether the General Data Protection Regulation (GDPR) provides data subjects with a right to explanation in cases of automated decision-making has equally been the subject of heated doctrinal debate. While arguing that the right to explanation in the GDPR should result from an interpretative analysis of several GDPR provisions read jointly, the authors move this debate forward by discussing the technical and legal feasibility of explaining algorithmic decisions. Legal limits, in particular the secrecy of algorithms, as well as technical obstacles, could potentially obstruct the practical implementation of this right. By adopting an interdisciplinary approach, the authors explore not only whether it is possible to translate the EU legal requirements for an explanation into actual machine-learning decision-making, but also whether those limitations can shape the way the legal right is used in practice.
Pages: 18-50
Page count: 33