AI Empire: Unraveling the interlocking systems of oppression in generative AI's global order

Cited by: 20
Authors
Tacheva, Jasmina [1 ,3 ]
Ramasubramanian, Srividya [2 ]
Affiliations
[1] Syracuse Univ, Sch Informat Studies, Syracuse, NY USA
[2] Syracuse Univ, Newhouse Sch Publ Commun, Syracuse, NY USA
[3] Syracuse Univ, Sch Informat Studies, Syracuse, NY 13210 USA
Source
BIG DATA & SOCIETY, 2023, Vol. 10, Issue 2
Keywords
AI Empire; generative AI; critical AI; intersectionality; algorithmic oppression; data colonialism
DOI
10.1177/20539517231219241
Chinese Library Classification (CLC)
C [Social Sciences, General]
Subject Classification Codes
03; 0303
Abstract
As artificial intelligence (AI) continues to captivate the collective imagination through the latest generation of generative AI models such as DALL-E and ChatGPT, the dehumanizing and harmful features of the technology industry that have plagued it since its inception only seem to deepen and intensify. Far from a "glitch" or unintentional error, these endemic issues are a function of the interlocking systems of oppression upon which AI is built. Using the analytical framework of "Empire," this paper demonstrates that we live not simply in the "age of AI" but in the age of AI Empire. Specifically, we show that this networked and distributed global order is rooted in heteropatriarchy, racial capitalism, white supremacy, and coloniality and perpetuates its influence through the mechanisms of extractivism, automation, essentialism, surveillance, and containment. Therefore, we argue that any attempt at reforming AI from within the same interlocking oppressive systems that created it is doomed to failure and, moreover, risks exacerbating existing harm. Instead, to advance justice, we must radically transform not just the technology itself, but our ideas about it, and develop it from the bottom up, from the perspectives of those who stand the most risk of being harmed.
Pages: 13