Typology of Risks of Generative Text-to-Image Models

Cited by: 38
Authors
Bird, Charlotte [1 ]
Ungless, Eddie L. [1 ]
Kasirzadeh, Atoosa [2 ]
Affiliations
[1] Univ Edinburgh, Sch Informat, Edinburgh, Midlothian, Scotland
[2] Univ Edinburgh, Alan Turing Inst, Edinburgh, Midlothian, Scotland
Source
PROCEEDINGS OF THE 2023 AAAI/ACM CONFERENCE ON AI, ETHICS, AND SOCIETY, AIES 2023 | 2023
Funding
UK Arts and Humanities Research Council
Keywords
Generative AI; Generative models; Text-to-Image models; Responsible AI; AI ethics; AI safety; AI governance; AI risks; ETHNICITY; HEALTH
DOI
10.1145/3600211.3604722
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
This paper investigates the direct risks and harms associated with modern text-to-image generative models, such as DALL-E and Midjourney, through a comprehensive literature review. While these models offer unprecedented capabilities for generating images, their development and use introduce new types of risk that require careful consideration. Our review reveals significant knowledge gaps concerning the understanding and treatment of these risks, even though some have already begun to be addressed. We offer a taxonomy of risks across six key stakeholder groups, inclusive of unexplored issues, and suggest future research directions. We identify 22 distinct risk types, spanning issues from data bias to malicious use. The investigation presented here is intended to enhance the ongoing discourse on responsible model development and deployment. By highlighting previously overlooked risks and gaps, it aims to shape subsequent research and governance initiatives, guiding them toward the responsible, secure, and ethically conscious evolution of text-to-image models.
Pages: 396-410
Page count: 15