Unmasking Nationality Bias: A Study of Human Perception of Nationalities in AI-Generated Articles

Cited by: 6
Authors
Venkit, Pranav Narayanan [1 ]
Gautam, Sanjana [1 ]
Panchanadikar, Ruchi [1 ]
Huang, Ting-Hao 'Kenneth' [1 ]
Wilson, Shomir [1 ]
Affiliations
[1] Penn State University, University Park, PA 16802, USA
Source
PROCEEDINGS OF THE 2023 AAAI/ACM CONFERENCE ON AI, ETHICS, AND SOCIETY, AIES 2023 | 2023
Keywords
Natural Language Processing; Ethics in AI; Nationality Bias; HCI
DOI
10.1145/3600211.3604667
CLC Number
TP18 [Theory of Artificial Intelligence]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
We investigate the potential for nationality biases in natural language processing (NLP) models using human evaluation methods. Biased NLP models can perpetuate stereotypes and lead to algorithmic discrimination, posing a significant challenge to the fairness and justice of AI systems. Our study employs a two-step mixed-methods approach that includes both quantitative and qualitative analysis to identify and understand the impact of nationality bias in a text generation model. Through our human-centered quantitative analysis, we measure the extent of nationality bias in articles generated by AI sources. We then conduct open-ended interviews with participants, performing qualitative coding and thematic analysis to understand the implications of these biases on human readers. Our findings reveal that biased NLP models tend to replicate and amplify existing societal biases, which can translate to harm if used in a sociotechnical setting. The qualitative analysis from our interviews offers insights into the experience readers have when encountering such articles, highlighting the potential to shift a reader's perception of a country. These findings emphasize the critical role of public perception in shaping AI's impact on society and the need to correct biases in AI systems.
Pages: 554-565
Number of pages: 12