Using sensitive data to prevent discrimination by artificial intelligence: Does the GDPR need a new exception?

Cited by: 19
Authors
van Bekkum, Marvin [1 ]
Borgesius, Frederik Zuiderveen
Affiliations
[1] Radboud Univ Nijmegen, Interdisciplinary Hub Digitalizat & Soc iHub, Erasmuslaan 1, NL-6525 GE Nijmegen, Netherlands
Keywords
Non-discrimination; Data protection; AI; Automated decision-making; Artificial intelligence; Special categories of data; AI fairness testing; AI auditing; Discrimination
DOI
10.1016/j.clsr.2022.105770
Chinese Library Classification
D9 [Law]; DF [Law]
Discipline classification code
0301
Abstract
Organisations can use artificial intelligence to make decisions about people for a variety of reasons, for instance, to select the best candidates from many job applications. However, AI systems can have discriminatory effects when used for decision-making. To illustrate, an AI system could reject applications from people of a certain ethnicity, even though the organisation did not intend such ethnicity discrimination. But in Europe, an organisation runs into a problem when it wants to assess whether its AI system accidentally discriminates based on ethnicity: the organisation may not know the applicants' ethnicity. In principle, the GDPR bans the use of certain 'special categories of data' (sometimes called 'sensitive data'), which include data on ethnicity, religion, and sexual preference. The European Commission's proposal for an AI Act includes a provision that would enable organisations to use special categories of data for auditing their AI systems. This paper asks whether the GDPR's rules on special categories of personal data hinder the prevention of AI-driven discrimination. We argue that the GDPR does prohibit such use of special category data in many circumstances. We also map out the arguments for and against creating an exception to the GDPR's ban on using special categories of personal data, to enable preventing discrimination by AI systems. The paper discusses European law, but it can be relevant outside Europe too, as policymakers around the world grapple with the tension between privacy and non-discrimination policy. (c) 2022 Published by Elsevier Ltd. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/)
Pages: 12
References
70 in total (first 10 shown)
  • [1] Adams-Prassl, Jeremias; Binns, Reuben; Kelly-Lyth, Aislinn. Directly Discriminatory Algorithms. Modern Law Review, 2023, 86(1): 144-175
  • [2] Agahari, Wirawan; Ofe, Hosea; de Reuver, Mark. It is not (only) about privacy: How multi-party computation redefines control, trust, and risk in data sharing. Electronic Markets, 2022, 32(3): 1577-1602
  • [3] AI HLEG, 2019, DEF AI MAIN CAP DISC
  • [4] Al-Zubaidi, Y., 2020, EUROPEAN EQUALITY LA, vol. 2, p. 65
  • [5] Alidadi, K., 2017, EUROPEAN EQUALITY LA, vol. 2, p. 21
  • [6] Andrus, M., 2021, arXiv
  • [7] [Anonymous], JERS
  • [8] [Anonymous], 2010, SAINT MART PERS DAT
  • [9] [Anonymous], PRINC 5
  • [10] [Anonymous], 2018, The UK Data Protection Act 2018