More human than human: measuring ChatGPT political bias

Cited by: 114
Authors
Motoki, Fabio [1 ]
Neto, Valdemar Pinho [2 ]
Rodrigues, Victor [3 ]
Affiliations
[1] Univ East Anglia, Norwich Business Sch, Norwich Res Pk, Norwich NR4 7TJ, England
[2] FGV EPGE & FGV CEEE, Praia Botafogo 190, BR-22250900 Rio De Janeiro, RJ, Brazil
[3] Nova Educ, R Jose Vilar 1707, BR-60125025 Fortaleza, CE, Brazil
Keywords
Bias; Political bias; Large language models; ChatGPT; Media bias; Polarization
DOI
10.1007/s11127-023-01097-2
Chinese Library Classification
F [Economics]
Discipline code
02
Abstract
We investigate the political bias of a large language model (LLM), ChatGPT, which has become popular for retrieving factual information and generating content. Although ChatGPT assures that it is impartial, the literature suggests that LLMs exhibit bias involving race, gender, religion, and political orientation. Political bias in LLMs can have adverse political and electoral consequences similar to bias from traditional and social media. Moreover, political bias can be harder to detect and eradicate than gender or racial bias. We propose a novel empirical design to infer whether ChatGPT has political biases by requesting it to impersonate someone from a given side of the political spectrum and comparing these answers with its default. We also propose dose-response, placebo, and profession-politics alignment robustness tests. To reduce concerns about the randomness of the generated text, we collect answers to the same questions 100 times, with question order randomized on each round. We find robust evidence that ChatGPT presents a significant and systematic political bias toward the Democrats in the US, Lula in Brazil, and the Labour Party in the UK. These results translate into real concerns that ChatGPT, and LLMs in general, can extend or even amplify the existing challenges involving political processes posed by the Internet and social media. Our findings have important implications for policymakers and for stakeholders in media, politics, and academia.
Pages: 3-23 (21 pages)