Generative AI and the politics of visibility

Cited by: 9
Authors
Gillespie, Tarleton [1,2]
Affiliations
[1] Microsoft Res, Cambridge, MA 02142 USA
[2] Cornell Univ, Dept Commun, Ithaca, NY 14850 USA
Source
BIG DATA & SOCIETY | 2024, Vol. 11, No. 2
Keywords
Generative AI; representation; bias; normativity; media; markedness; TELEVISION
DOI
10.1177/20539517241252131
CLC Classification Number
C [Social Sciences, General]
Discipline Classification Code
03; 0303
Abstract
Proponents of generative AI tools claim they will supplement, even replace, the work of cultural production. This raises questions about the politics of visibility: what kinds of stories do these tools tend to generate, and what do they generally not? Do these tools match the kind of diversity of representation that marginalized populations and non-normative communities have fought to secure in publishing and broadcast media? I tested three widely available generative AI tools with prompts designed to reveal these normative assumptions; I prompted the tools multiple times with each, to track the diversity of the outputs to the same query. I demonstrate that, as currently designed and trained, generative AI tools tend to reproduce normative identities and narratives, rarely representing less common arrangements and perspectives. When they do generate variety, it is often narrow, maintaining deeper normative assumptions in what remains absent.
Pages: 14