Are AI systems biased against the poor? A machine learning analysis using Word2Vec and GloVe embeddings

Cited by: 0
Authors
Georgina Curto
Mario Fernando Jojoa Acosta
Flavio Comim
Begoña Garcia-Zapirain
Institutions
[1] Universitat Ramon Llull, eVida Research Laboratory
[2] IQS School of Management
[3] Universitat Autònoma de Barcelona
[4] EINA Centre Universitari de Disseny i Art
[5] University of Deusto
Source
AI & SOCIETY | 2024 / Volume 39
Keywords
Bias; Artificial intelligence; Embeddings; Poverty
DOI: Not available
Abstract
Among the myriad technical approaches and abstract guidelines proposed on the topic of AI bias, there has been an urgent call to translate the principle of fairness into operational AI practice with the involvement of social sciences specialists who can analyse the context of specific types of bias, since no generalizable solution exists. This article offers an interdisciplinary contribution to the topic of AI and societal bias, in particular bias against the poor, providing a conceptual framework of the issue and a tailor-made model from which meaningful data are obtained using Natural Language Processing word vectors in the pretrained Google Word2Vec, Twitter GloVe and Wikipedia GloVe word embeddings. The results of the study offer the first set of data evidencing the existence of bias against the poor, and suggest that Google Word2Vec shows a higher degree of bias when the terms relate to beliefs, whereas bias is higher in Twitter GloVe when the terms express behaviour. This article contributes to the body of work on bias, both from an AI and a social sciences perspective, by providing evidence of a transversal aggravating factor for historical types of discrimination. The evidence of bias against the poor also has important consequences for human development, since such bias often leads to discrimination, which constitutes an obstacle to the effectiveness of poverty reduction policies.
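As a rough illustration of the kind of probe the abstract describes (not the authors' own code, word lists, or metric), the minimal sketch below loads two of the pretrained embeddings mentioned above through gensim's downloader and measures, via cosine similarity, how close a few poverty-related terms sit to illustrative pleasant versus unpleasant attribute words. All term lists and the association score are hypothetical choices made for demonstration only.

```python
# A minimal sketch, not the authors' implementation: probing how close
# poverty-related terms sit to pleasant vs. unpleasant attribute words
# in pretrained embeddings, using cosine similarity from gensim.
# The word lists and the association score are illustrative assumptions.
import gensim.downloader as api

TARGET_TERMS = ["poor", "needy", "homeless"]          # hypothetical target terms
PLEASANT = ["honest", "hardworking", "trustworthy"]   # hypothetical attribute list
UNPLEASANT = ["lazy", "dishonest", "criminal"]        # hypothetical attribute list

def mean_similarity(vectors, word, attributes):
    # Average cosine similarity between `word` and the in-vocabulary attribute terms.
    sims = [vectors.similarity(word, a) for a in attributes if a in vectors]
    return sum(sims) / len(sims) if sims else float("nan")

def association(vectors, word):
    # Positive values: the term leans toward the pleasant attributes;
    # negative values: it leans toward the unpleasant ones.
    return mean_similarity(vectors, word, PLEASANT) - mean_similarity(vectors, word, UNPLEASANT)

if __name__ == "__main__":
    # Both datasets ship with gensim's downloader (large downloads on first use).
    for name in ("word2vec-google-news-300", "glove-twitter-100"):
        vectors = api.load(name)
        for term in TARGET_TERMS:
            if term in vectors:
                print(f"{name:28s} {term:10s} association = {association(vectors, term):+.3f}")
```

Comparing the sign and magnitude of such scores across embedding sources is one simple way to contrast, say, Google News Word2Vec with Twitter GloVe; the paper's own analysis distinguishes belief-related from behaviour-related terms, which this sketch does not attempt.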
Pages: 617-632 (15 pages)