Beyond the ML Model: Applying Safety Engineering Frameworks to Text-to-Image Development

Cited: 2
Authors
Rismani, Shalaleh [1 ]
Shelby, Renee [2 ]
Smart, Andrew [2 ]
Delos Santos, Renelito [2 ]
Moon, Ajung [1 ]
Rostamzadeh, Negar [2 ]
Affiliations
[1] McGill Univ, Montreal, PQ, Canada
[2] Google Res, San Francisco, CA USA
Source
PROCEEDINGS OF THE 2023 AAAI/ACM CONFERENCE ON AI, ETHICS, AND SOCIETY, AIES 2023 | 2023
Funding
Natural Sciences and Engineering Research Council of Canada (NSERC);
Keywords
Safety engineering; T2I generative models; Responsible ML; Art;
DOI
10.1145/3600211.3604685
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Identifying potential social and ethical risks in emerging machine learning (ML) models and their applications remains challenging. In this work, we applied two well-established safety engineering frameworks (FMEA, STPA) to a case study involving text-to-image (T2I) models at three stages of the ML product development pipeline: data processing, integration of a T2I model with other models, and use. Our analysis demonstrates that these safety frameworks - neither of which is explicitly designed to examine social and ethical risks - can uncover failures and hazards that pose such risks. We discovered a broad range of failures and hazards (i.e., functional, social, and ethical) by analyzing interactions (i.e., between different ML models in the product, between the ML product and the user, and between development teams) and processes (i.e., preparation of training data or workflows for using an ML service/product). Our findings underscore the value and importance of looking beyond the ML model when examining social and ethical risks, especially when minimal information about the ML model is available.
Pages: 70-83
Page count: 14
    [J]. ACM TRANSACTIONS ON MULTIMEDIA COMPUTING COMMUNICATIONS AND APPLICATIONS, 2019, 15 (02)