Policy advice and best practices on bias and fairness in AI

Cited by: 24

Authors
Alvarez, Jose M. [1,2]
Colmenarejo, Alejandra Bringas [3]
Elobaid, Alaa [4,5]
Fabbrizzi, Simone [4,6,7]
Fahimi, Miriam [8]
Ferrara, Antonio [9,10]
Ghodsi, Siamak [5,6]
Mougan, Carlos [3]
Papageorgiou, Ioanna [6]
Reyero, Paula [11]
Russo, Mayra [6]
Scott, Kristen M. [12]
State, Laura [1,2]
Zhao, Xuan [13]
Ruggieri, Salvatore [2]
Affiliations
[1] Scuola Normale Super Pisa, Pisa, Italy
[2] Univ Pisa, Pisa, Italy
[3] Univ Southampton, Southampton, England
[4] CERTH, Thessaloniki, Greece
[5] Free Univ Berlin, Berlin, Germany
[6] Leibniz Univ Hannover, Hannover, Germany
[7] Free Univ Bozen Bolzano, Bolzano, Italy
[8] Univ Klagenfurt, Klagenfurt, Austria
[9] GESIS Leibniz Inst, Mannheim, Germany
[10] Rhein Westfal TH Aachen, Aachen, Germany
[11] Open Univ, Milton Keynes, England
[12] Katholieke Univ Leuven, Leuven, Belgium
[13] SCHUFA Holding AG, Wiesbaden, Germany
Funding
European Union Horizon 2020;
Keywords
Artificial Intelligence; Bias; Fairness; Policy advice; Best practices; ALGORITHMIC FAIRNESS; ARTIFICIAL-INTELLIGENCE; DISCRIMINATION; IMPACT; STRATEGIES;
DOI
10.1007/s10676-024-09746-w
CLC number
B82 [Ethics (moral philosophy)];
Abstract
The literature addressing bias and fairness in AI models (fair-AI) is growing at a fast pace, making it difficult for new researchers and practitioners to gain a bird's-eye view of the field. In particular, many policy initiatives, standards, and best practices in fair-AI have been proposed to set out principles, procedures, and knowledge bases that guide and operationalize the management of bias and fairness. The first objective of this paper is to concisely survey the state of the art in fair-AI methods and resources, as well as the main policies on bias in AI, with the aim of providing such bird's-eye guidance for both researchers and practitioners. The second objective is to contribute to the state of the art in policy advice and best practices by leveraging the results of the NoBIAS research project. We present and discuss several relevant topics organized around the NoBIAS architecture, which consists of a Legal Layer, focusing on the European Union context, and a Bias Management Layer, focusing on understanding, mitigating, and accounting for bias.
Pages: 26