Fairness-aware machine learning engineering: how far are we?

Cited by: 15
Authors
Ferrara, Carmine [1]
Sellitto, Giulia [1]
Ferrucci, Filomena [1]
Palomba, Fabio [1]
De Lucia, Andrea [1]
Affiliations
[1] Univ Salerno, Software Engn SeSa Lab, Salerno, Italy
Funding
Swiss National Science Foundation;
Keywords
Software fairness; Machine learning; Empirical software engineering; Survey study; Practitioners' perspective;
DOI
10.1007/s10664-023-10402-y
Chinese Library Classification
TP31 [Computer software];
Subject Classification Code
081202 ; 0835 ;
Abstract
Machine learning is part of the daily life of people and companies worldwide. Unfortunately, bias in machine learning algorithms risks unfairly influencing decision-making processes and perpetuating discrimination. While the software engineering community's interest in software fairness is rapidly increasing, there is still a lack of understanding of various aspects of fair machine learning engineering, i.e., the software engineering process involved in developing fairness-critical machine learning systems. Questions about practitioners' awareness of and maturity with fairness, the skills required to deal with the matter, and the development phase(s) in which fairness should best be addressed are just some examples of the knowledge gaps currently open. In this paper, we provide insights into how fairness is perceived and managed in practice, to shed light on the instruments and approaches that practitioners might employ to handle fairness properly. We conducted a survey of 117 professionals who shared their knowledge and experience, highlighting the relevance of fairness in practice and the skills and tools required to handle it. The key results of our study show that fairness is still considered a second-class quality aspect in the development of artificial intelligence systems. Building specific methods and development environments, in addition to automated validation tools, might help developers treat fairness throughout the software lifecycle and reverse this trend.
Pages: 46