More human than human: measuring ChatGPT political bias

Cited by: 0
Authors
Fabio Motoki
Valdemar Pinho Neto
Victor Rodrigues
Affiliations
[1] University of East Anglia, Norwich Business School
[2] FGV EPGE and FGV CEEE
[3] Nova Educação
Source
Public Choice | 2024, Vol. 198
Keywords
Bias; Political bias; Large language models; ChatGPT; JEL: C10; C89; D83; L86; Z00
DOI
Not available
Abstract
We investigate the political bias of a large language model (LLM), ChatGPT, which has become popular for retrieving factual information and generating content. Although ChatGPT assures that it is impartial, the literature suggests that LLMs exhibit bias involving race, gender, religion, and political orientation. Political bias in LLMs can have adverse political and electoral consequences similar to bias from traditional and social media. Moreover, political bias can be harder to detect and eradicate than gender or racial bias. We propose a novel empirical design to infer whether ChatGPT has political biases by requesting it to impersonate someone from a given side of the political spectrum and comparing these answers with its default. We also propose dose-response, placebo, and profession-politics alignment robustness tests. To reduce concerns about the randomness of the generated text, we collect answers to the same questions 100 times, with question order randomized on each round. We find robust evidence that ChatGPT presents a significant and systematic political bias toward the Democrats in the US, Lula in Brazil, and the Labour Party in the UK. These results translate into real concerns that ChatGPT, and LLMs in general, can extend or even amplify the existing challenges involving political processes posed by the Internet and social media. Our findings have important implications for policymakers and for stakeholders in media, politics, and academia.
Pages: 3-23 (20 pages)
Related papers
50 items
  • [31] ChatGPT as an Assistive Technology: Enhancing Human-Computer Interaction for People with Speech Impairments
    Roy, Ayon
    Karim, Enamul
    Bin Farukee, Minhaz
    Makedon, Fillia
    17TH ACM INTERNATIONAL CONFERENCE ON PERVASIVE TECHNOLOGIES RELATED TO ASSISTIVE ENVIRONMENTS, PETRA 2024, 2024, : 63 - 66
  • [32] From ELIZA to ChatGPT: History of Human-Computer Conversation
    Rajaraman, V.
    RESONANCE-JOURNAL OF SCIENCE EDUCATION, 2023, 28 (06): : 889 - 905
  • [33] The impact of ChatGPT on human skills: A quantitative study on twitter data
    Giordano, Vito
    Spada, Irene
    Chiarello, Filippo
    Fantoni, Gualtiero
    TECHNOLOGICAL FORECASTING AND SOCIAL CHANGE, 2024, 203
  • [34] ChatGPT: perspectives from human-computer interaction and psychology
    Liu, Jiaxi
    FRONTIERS IN ARTIFICIAL INTELLIGENCE, 2024, 7
  • [35] Application of ChatGPT-Based Digital Human in Animation Creation
    Lan, Chong
    Wang, Yongsheng
    Wang, Chengze
    Song, Shirong
    Gong, Zheng
    FUTURE INTERNET, 2023, 15 (09):
  • [36] Face to face: Comparing ChatGPT with human performance on face matching
    Kramer, Robin S. S.
    PERCEPTION, 2025, 54 (01) : 65 - 68
  • [37] Leveraging ChatGPT in public sector human resource management education
    Allgood, Michelle
    Musgrave, Paul
    JOURNAL OF PUBLIC AFFAIRS EDUCATION, 2024, 30 (03) : 494 - 510
  • [38] Assessing ChatGPT's ability to emulate human reviewers in scientific research: A descriptive and qualitative approach
    Suleiman, Aiman
    von Wedel, Dario
    Munoz-Acuna, Ricardo
    Redaelli, Simone
    Santarisi, Abeer
    Seibold, Eva-Lotte
    Ratajczak, Nikolai
    Kato, Shinichiro
    Said, Nader
    Sundar, Eswar
    Goodspeed, Valerie
    Schaefer, Maximilian S.
    COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE, 2024, 254
  • [39] Is acupuncture no more than a placebo? Extensive discussion required about possible bias (Review)
    Deng, Shizhe
    Zhao, Xiaofeng
    Du, Rong
    He, Si
    Wen, Yan
    Huang, Linghui
    Tian, Guang
    Zhang, Chao
    Meng, Zhihong
    Shi, Xuemin
    EXPERIMENTAL AND THERAPEUTIC MEDICINE, 2015, 10 (04) : 1247 - 1252
  • [40] The construction of human rights: accounting for systematic bias in common human rights measures
    Nieman, Mark David
    Ring, Jonathan J.
    EUROPEAN POLITICAL SCIENCE, 2015, 14 : 473 - 495