Typology of Risks of Generative Text-to-Image Models

Cited by: 38
Authors
Bird, Charlotte [1 ]
Ungless, Eddie L. [1 ]
Kasirzadeh, Atoosa [2 ]
Affiliations
[1] Univ Edinburgh, Sch Informat, Edinburgh, Midlothian, Scotland
[2] Univ Edinburgh, Alan Turing Inst, Edinburgh, Midlothian, Scotland
Source
PROCEEDINGS OF THE 2023 AAAI/ACM CONFERENCE ON AI, ETHICS, AND SOCIETY, AIES 2023 | 2023
Funding
UK Arts and Humanities Research Council;
Keywords
Generative AI; Generative models; Text-to-Image models; Responsible AI; AI ethics; AI safety; AI governance; AI risks; ETHNICITY; HEALTH;
DOI
10.1145/3600211.3604722
CLC classification number
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104; 0812; 0835; 1405;
Abstract
This paper investigates the direct risks and harms associated with modern text-to-image generative models, such as DALL-E and Midjourney, through a comprehensive literature review. While these models offer unprecedented capabilities for generating images, their development and use introduce new types of risk that require careful consideration. Our review reveals significant knowledge gaps in the understanding and treatment of these risks, even though some are already being addressed. We offer a taxonomy of risks across six key stakeholder groups, including previously unexplored issues, and suggest future research directions. We identify 22 distinct risk types, spanning issues from data bias to malicious use. The investigation presented here is intended to enhance the ongoing discourse on responsible model development and deployment. By highlighting previously overlooked risks and gaps, it aims to shape subsequent research and governance initiatives, guiding them toward the responsible, secure, and ethically conscious evolution of text-to-image models.
Pages: 396-410
Page count: 15