Machine learning's limitations in avoiding automation of bias

Cited by: 10
Authors
Varona, Daniel [1 ]
Lizama-Mue, Yadira [1 ]
Suarez, Juan Luis [1 ]
Affiliations
[1] Western Univ, CulturePlex Lab, 1151 Richmond St, London, ON N6A 3K7, Canada
Keywords
Machine learning; Bias; Bias automation; Artificial intelligence; Prediction
DOI
10.1007/s00146-020-00996-y
CLC Classification Number
TP18 [Theory of artificial intelligence]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
The use of predictive systems has widened with the development of related computational methods and the evolution of the sciences in which these methods are applied (Barocas and Selbst, Calif L Rev 104:671-732, 2016; Pedreschi et al. 2007). The methods referred to include machine learning techniques, face and/or voice recognition, temperature mapping, and others within the artificial intelligence domain. These techniques are being applied to solve problems in socially and politically sensitive areas such as crime prevention and justice management, crowd management, and emotion analysis, to mention a few. However, applying these methods can nowadays yield dissimilar predictions and misclassification, for example in conviction risk assessment (Office of Probation and Pretrial Services 2011) or in decision-making processes when designing public policies (Lange 2015). The goal of this paper is to identify current gaps in achieving fairness within the context of predictive systems in artificial intelligence by analyzing the academic and scientific literature available up to 2020. To achieve this goal, we gathered the materials available in the Web of Science and Scopus from the last 5 years and analyzed the different proposed methods and their results in relation to bias as an emergent issue in the field of artificial intelligence. Our tentative conclusions indicate that machine learning has intrinsic limitations that lead to the automation of bias when designing predictive algorithms. Consequently, other methods should be explored, or we should redefine how current machine learning approaches are used when building decision-making and decision-support systems for crucial institutions of our political systems, such as the judicial system, to mention just one.
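The bias automation discussed in the abstract is often made operational in the cited literature through fairness metrics such as the disparate impact ratio, which Feldman et al. (2015, reference [10]) formalize from the "four-fifths rule". Below is a minimal sketch in Python, assuming hypothetical binary predictions and group labels; the function name, group names, and data are illustrative and not taken from the paper.

# Illustrative sketch of the disparate impact ratio ("four-fifths rule")
# formalized by Feldman et al. (2015). All data below is hypothetical.

def disparate_impact(predictions, groups, unprivileged, privileged):
    """Ratio of favorable-outcome rates between two groups.

    A ratio below 0.8 is commonly flagged as potential disparate impact.
    """
    def favorable_rate(group):
        outcomes = [p for p, g in zip(predictions, groups) if g == group]
        return sum(outcomes) / len(outcomes) if outcomes else float("nan")

    return favorable_rate(unprivileged) / favorable_rate(privileged)

# Hypothetical classifier outputs: 1 = favorable prediction
# (e.g., classified as low risk), 0 = unfavorable.
preds = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
grps = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

ratio = disparate_impact(preds, grps, unprivileged="b", privileged="a")
print(f"disparate impact ratio: {ratio:.2f}")  # 0.67 here, below the 0.8 threshold

A ratio this far below 0.8 would flag the hypothetical classifier for review; this is the kind of measurable, automated disparity the authors argue current machine learning pipelines can encode.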
Pages: 197-203
Page count: 7
References
27 items in total
[1] Anonymous, 2018, STAT ART INT ROB AUT.
[2] Anonymous, 2018, The Toronto Declaration: Protecting the Right to Equality and Non-discrimination in Machine Learning Systems.
[3] Anonymous, 2018, MONTR DECL RESP DEV.
[4] Ayat, Saeed; Farahani, Hojjat A.; Aghamohamadi, Mehdi; Alian, Mahmood; Aghamohamadi, Somayeh; Kazemi, Zeynab. A comparison of artificial neural networks learning algorithms in predicting tendency for suicide. Neural Computing & Applications, 2013, 23(05): 1381-1386.
[5] Barocas, Solon; Selbst, Andrew D. Big Data's Disparate Impact. California Law Review, 2016, 104(03): 671-732.
[6] Cem Geyik S, 2019, FAIRNESS AWARE RANKI.
[7] Chouldechova, Alexandra. Fair Prediction with Disparate Impact: A Study of Bias in Recidivism Prediction Instruments. Big Data, 2017, 5(02): 153-163.
[8] Cofone, I. N. SMU Law Review, 2019, 72: 139.
[9] Dwork, Cynthia. The Promise of Differential Privacy: A Tutorial on Algorithmic Techniques. 2011 IEEE 52nd Annual Symposium on Foundations of Computer Science (FOCS 2011), 2011: 1-2.
[10] Feldman, Michael; Friedler, Sorelle A.; Moeller, John; Scheidegger, Carlos; Venkatasubramanian, Suresh. Certifying and Removing Disparate Impact. KDD'15: Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2015: 259-268.