Understanding and evaluating harms of AI-generated image captions in political images

Times Cited: 0
Authors
Sarhan, Habiba [1 ]
Hegelich, Simon [1 ]
Affiliations
[1] Tech Univ Munich, Polit Data Sci, Munich, Germany
Source
FRONTIERS IN POLITICAL SCIENCE, 2023
Keywords
AI-generated image captions; representational harms; inclusion; responsible AI; AI harms
DOI
10.3389/fpos.2023.1245684
Chinese Library Classification
D81 [International Relations]
Subject Classification Code
030207
Abstract
The use of AI-generated image captions is increasing. Scholars of disability studies have long examined accessibility and technology-bias issues in AI, focusing on image captions and tags. Less attention, however, has been paid to the individuals and social groups who are depicted in images and captioned by AI, and further research is needed to understand the representational harms that could affect them. This paper investigates the potential representational harms to social groups depicted in images. There is a high risk of harming certain social groups, either through stereotypical descriptions or by erasing their identities from the caption, which in turn can shape the understandings, beliefs, and attitudes that people hold about these groups. For this article, 1,000 images with human-annotated captions were collected from the "politics" sections of news agencies, and captions were generated with the December 2021 public version of Microsoft's Azure Cloud Services. The pattern observed across these politically salient images and their captions highlights the model's tendency to generate generic descriptions, which may harm misrepresented social groups. A balance between these harms therefore needs to be struck, and it is intertwined with the trade-off between generating generic and specific descriptions: generating generic descriptions, out of extra caution not to use stereotypes, erases and demeans excluded and already underrepresented social groups, while generating specific descriptions stereotypes and reifies them. Striking the appropriate trade-off is therefore crucial, especially for politically salient images.
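For context, the caption-generation step described in the abstract can be reproduced in outline with Azure's image-description endpoint. The sketch below is illustrative only: the paper names just "Microsoft's Azure Cloud Services" (December 2021 public version), so the Python SDK call (describe_image_in_stream from the azure-cognitiveservices-vision-computervision package), the environment variables, and the file path are assumptions, not the authors' documented pipeline.

    import os
    from azure.cognitiveservices.vision.computervision import ComputerVisionClient
    from msrest.authentication import CognitiveServicesCredentials

    # Assumed setup: a Computer Vision resource endpoint and key supplied via environment variables.
    client = ComputerVisionClient(
        os.environ["AZURE_CV_ENDPOINT"],
        CognitiveServicesCredentials(os.environ["AZURE_CV_KEY"]),
    )

    def caption_image(path, max_candidates=3):
        # Ask the service to describe a local image; it returns ranked caption candidates.
        with open(path, "rb") as image:
            description = client.describe_image_in_stream(
                image, max_candidates=max_candidates, language="en"
            )
        return [(c.text, c.confidence) for c in description.captions]

    # Hypothetical file name for one of the 1,000 collected news images.
    for text, confidence in caption_image("images/politics_0001.jpg"):
        print(f"{confidence:.2f}  {text}")

Comparing the top candidate against the human-annotated caption for each image is one way to surface the generic-vs.-specific patterns the abstract discusses.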
Pages: 14
Related Papers
50 records in total
  • [1] Addressing the harms of AI-generated inauthentic content
    Menczer, Filippo
    Crandall, David
    Ahn, Yong-Yeol
    Kapadia, Apu
    NATURE MACHINE INTELLIGENCE, 2023, 5 (7) : 679 - 680
  • [2] A Human-factors Approach for Evaluating AI-generated Images
    Combs, Kara
    Bihl, Trevor J.
    Gadre, Arya
    Christopherson, Isaiah
    PROCEEDINGS OF THE 2024 COMPUTERS AND PEOPLE RESEARCH CONFERENCE, SIGMIS-CPR 2024, 2024
  • [3] Global-Local Image Perceptual Score (GLIPS): Evaluating Photorealistic Quality of AI-Generated Images
    Aziz, Memoona
    Rehman, Umair
    Danish, Muhammad Umair
    Grolinger, Katarina
    IEEE TRANSACTIONS ON HUMAN-MACHINE SYSTEMS, 2025, 55 (02) : 223 - 233
  • [4] Unnatural Images: On AI-Generated Photographs
    Wasielewski, Amanda
    CRITICAL INQUIRY, 2024, 51 (01) : 1 - 29
  • [5] Online Detection of AI-Generated Images
    Epstein, David C.
    Jain, Ishan
    Wang, Oliver
    Zhang, Richard
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION WORKSHOPS, ICCVW, 2023, : 382 - 392
  • [6] Racial bias in AI-generated images
    Yang, Yiran
    AI & SOCIETY, 2025
  • [7] An Analysis of the Copyrightability of AI-Generated Images
    Zheng, Xianfang
    Xing, Ziran
    CONTEMPORARY SOCIAL SCIENCES, 2024, 9 (06) : 100 - 114
  • [8] Advances in AI-Generated Images and Videos
    Bougueffa, Hessen
    Keita, Mamadou
    Hamidouche, Wassim
    Taleb-Ahmed, Abdelmalik
    Liz-Lopez, Helena
    Martin, Alejandro
    Camacho, David
    Hadid, Abdenour
    INTERNATIONAL JOURNAL OF INTERACTIVE MULTIMEDIA AND ARTIFICIAL INTELLIGENCE, 2024, 9 (01)
  • [9] Gender stereotypes in AI-generated images
    Garcia-Ull, Francisco-Jose
    Melero-Lazaro, Monica
    PROFESIONAL DE LA INFORMACION, 2023, 32 (05)